ALGO 3h, 1h, 2h
This script tracks the crossing of the 10 EMA on the 3h timeframe and the 200 EMA on the 1h timeframe to open LONGS and SHORTS. Whether those LONGS or SHORTS actually trigger depends on the first two EMAs' position relative to a third "controller" EMA.
3MA Alignment Ribbon [AlgoTraderPro]
3MA Alignment Ribbon is a clean, visual trend-structure indicator based on the alignment of three moving averages.
Instead of focusing on crossovers, it highlights trend quality and direction by dynamically coloring the moving averages and the ribbon between them.
The goal is simple: see at a glance whether the market structure is bullish, bearish, or neutral.
⸻
How It Works
The indicator plots three moving averages (Fast, Medium, Slow) and evaluates their alignment:
Bullish → Fast > Medium > Slow
Bearish → Fast < Medium < Slow
Neutral → Any other configuration
The moving averages and the filled ribbon between them are automatically colored based on this alignment, making trend conditions immediately visible.
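For readers who think in code, the alignment logic reduces to two boolean checks. This is a minimal Pine sketch of the idea, not the published script; the default lengths (50/100/200) are illustrative:
```pinescript
//@version=5
indicator("3MA Alignment Sketch", overlay=true)
// Illustrative lengths; the published script exposes these as inputs
fastMa = ta.ema(close, 50)
medMa  = ta.ema(close, 100)
slowMa = ta.ema(close, 200)
bullish = fastMa > medMa and medMa > slowMa   // Fast > Medium > Slow
bearish = fastMa < medMa and medMa < slowMa   // Fast < Medium < Slow
col = bullish ? color.green : bearish ? color.red : color.gray
plot(fastMa, color = col)
plot(medMa,  color = col)
plot(slowMa, color = col)
```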
⸻
Key Features
✅ Multi-MA Support
Choose between:
EMA (default)
SMA
WMA
VWMA
RMA
HMA
✅ Alignment Modes
Strict: classic Fast > Medium > Slow logic
Tolerant: requires a minimum percentage separation between averages to reduce noise in ranging markets (see the sketch after this feature list)
✅ Optional Slope Filter
Filter signals by requiring the slow MA to be rising or falling, helping avoid false trend states during flat conditions.
✅ Higher Timeframe (HTF) Ribbon
Overlay the same 3MA alignment logic from a higher timeframe on your current chart:
Fully optional
Correctly calculated on the HTF itself
Option to use confirmed HTF bars only (reduces repaint confusion)
✅ Alerts & Visual Signals
Alerts when alignment turns Bullish, Bearish, or Neutral
Optional on-chart markers when the alignment flips
✅ Custom Styling
Fully customizable colors and transparency
Designed to remain readable without cluttering the chart
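The Tolerant alignment mode mentioned above can be sketched as a percentage-separation check. A minimal illustration, assuming a hypothetical tolPct input (the published script's actual input names may differ):
```pinescript
//@version=5
indicator("Tolerant Alignment Sketch", overlay=false)
// tolPct is a hypothetical input name used for illustration only
tolPct = input.float(0.1, "Min separation %") / 100.0
fastMa = ta.ema(close, 50)
medMa  = ta.ema(close, 100)
slowMa = ta.ema(close, 200)
// Strict: simple ordering. Tolerant: each average must clear the next by tolPct.
strictBull   = fastMa > medMa and medMa > slowMa
tolerantBull = fastMa > medMa * (1 + tolPct) and medMa > slowMa * (1 + tolPct)
plot(tolerantBull ? 1.0 : strictBull ? 0.5 : 0.0, "Alignment state", style=plot.style_stepline)
```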
⸻
How to Use It
This indicator works best as:
A trend filter for discretionary trading
A context tool for entries using other setups
A market structure overlay for scalping, swing trading, or position trading
It does not generate buy/sell signals by itself — instead, it helps you trade aligned with market structure.
⸻
Default Settings
The default lengths (50 / 100 / 200 EMA) are suitable for most markets and timeframes, including crypto, forex, indices, and equities.
MACD Enhanced [DCAUT]
█ MACD Enhanced
📊 ORIGINALITY & INNOVATION
The MACD Enhanced represents a significant improvement over traditional MACD implementations. While Gerald Appel's original MACD from the 1970s was limited to exponential moving averages (EMA), this enhanced version expands algorithmic options by supporting 21 different moving average calculations for both the main MACD line and signal line independently.
This improvement addresses an important limitation of traditional MACD: the inability to adapt the indicator's mathematical foundation to different market conditions. By allowing traders to select from algorithms ranging from simple moving averages (SMA) for stability to advanced adaptive filters like Kalman Filter for noise reduction, this implementation changes MACD from a fixed-algorithm tool into a flexible instrument that can be adjusted for specific market environments and trading strategies.
The enhanced histogram visualization system uses a four-color gradient that helps communicate momentum strength and direction more clearly than traditional single-color histograms.
📐 MATHEMATICAL FOUNDATION
The core calculation maintains the proven MACD formula: Fast MA(source, fastLength) - Slow MA(source, slowLength), but extends it with algorithmic flexibility. The signal line applies the selected smoothing algorithm to the MACD line over the specified signal period, while the histogram represents the difference between MACD and signal lines.
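As a concrete illustration of that formula, here is a minimal Pine sketch using EMA for all three calculations; in the published script each ta.ema call would be replaced by the user-selected algorithm:
```pinescript
//@version=5
indicator("MACD Core Sketch")
fastLen = input.int(12, "Fast Length")
slowLen = input.int(26, "Slow Length")
sigLen  = input.int(9,  "Signal Length")
// Core formula: Fast MA − Slow MA; signal smooths the MACD line
macdLine   = ta.ema(close, fastLen) - ta.ema(close, slowLen)
signalLine = ta.ema(macdLine, sigLen)
histogram  = macdLine - signalLine
plot(macdLine, "MACD", color.blue)
plot(signalLine, "Signal", color.orange)
plot(histogram, "Histogram", style=plot.style_histogram)
```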
Available Algorithms:
The implementation supports a comprehensive spectrum of technical analysis algorithms:
Basic Averages: SMA (arithmetic mean), EMA (exponential weighting), RMA (Wilder's smoothing), WMA (linear weighting)
Advanced Averages: HMA (Hull's low-lag), VWMA (volume-weighted), ALMA (Arnaud Legoux adaptive)
Mathematical Filters: LSMA (least squares regression), DEMA (double exponential), TEMA (triple exponential), ZLEMA (zero-lag exponential)
Adaptive Systems: T3 (Tillson T3), FRAMA (fractal adaptive), KAMA (Kaufman adaptive), MCGINLEY_DYNAMIC (reactive to volatility)
Signal Processing: ULTIMATE_SMOOTHER (low-pass filter), LAGUERRE_FILTER (four-pole IIR), SUPER_SMOOTHER (two-pole Butterworth), KALMAN_FILTER (state-space estimation)
Specialized: TMA (triangular moving average), LAGUERRE_BINOMIAL_FILTER (binomial smoothing)
Each algorithm responds differently to price action, allowing traders to match the indicator's behavior to market characteristics: trending markets benefit from responsive algorithms like EMA or HMA, while ranging markets require stable algorithms like SMA or RMA.
📊 COMPREHENSIVE SIGNAL ANALYSIS
Histogram Interpretation:
Positive Values: Indicate bullish momentum when MACD line exceeds signal line, suggesting upward price pressure and potential buying opportunities
Negative Values: Reflect bearish momentum when MACD line falls below signal line, indicating downward pressure and potential selling opportunities
Zero Line Crosses: MACD crossing above zero suggests transition to bullish bias, while crossing below indicates bearish bias shift
Momentum Changes: Rising histogram (regardless of positive/negative) signals accelerating momentum in the current direction, while declining histogram warns of momentum deceleration
Advanced Signal Recognition:
Divergences: Price making new highs/lows while MACD fails to confirm often precedes trend reversals
Convergence Patterns: MACD line approaching signal line suggests impending crossover and potential trade setup
Histogram Peaks: Extreme histogram values often mark momentum exhaustion points and potential reversal zones
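These interpretations map directly onto standard Pine conditions. A minimal sketch using the built-in ta.macd (the enhanced script computes the lines with its selected algorithms instead):
```pinescript
//@version=5
indicator("MACD Signal Conditions Sketch")
[macdLine, signalLine, histogram] = ta.macd(close, 12, 26, 9)
bullCross  = ta.crossover(macdLine, signalLine)    // bullish crossover
bearCross  = ta.crossunder(macdLine, signalLine)   // bearish crossover
zeroUp     = ta.crossover(macdLine, 0)             // bullish bias shift
momentumUp = histogram > histogram[1]              // accelerating momentum
plotshape(zeroUp, "Zero-line cross up", style=shape.triangleup)
```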
🎯 STRATEGIC APPLICATIONS
Comprehensive Trend Confirmation Strategies:
Primary Trend Validation Protocol:
Identify primary trend direction using higher timeframe (4H or Daily) MACD position relative to zero line
Confirm trend strength by analyzing histogram progression: consistent expansion indicates strong momentum, contraction suggests weakening
Use secondary confirmation from MACD line angle: steep angles (>45°) indicate strong trends, shallow angles suggest consolidation
Validate with price structure: trending markets show consistent higher highs/higher lows (uptrend) or lower highs/lower lows (downtrend)
Entry Timing Techniques:
Pullback Entries in Uptrends: Wait for MACD histogram to decline toward zero line without crossing, then enter on histogram expansion with MACD line still above zero
Breakout Confirmations: Use MACD line crossing above zero as confirmation of upward breakouts from consolidation patterns
Continuation Signals: Look for MACD line re-acceleration (steepening angle) after brief consolidation periods as trend continuation signals
Advanced Divergence Trading Systems:
Regular Divergence Recognition:
Bullish Regular Divergence: Price creates lower lows while MACD line forms higher lows. This pattern is traditionally considered a potential upward reversal signal, but should be combined with other confirmation signals
Bearish Regular Divergence: Price makes higher highs while MACD shows lower highs. This pattern is traditionally considered a potential downward reversal signal, but trading decisions should incorporate proper risk management
Hidden Divergence Strategies:
Bullish Hidden Divergence: Price shows higher lows while MACD displays lower lows, indicating trend continuation potential. Use for adding to existing long positions during pullbacks
Bearish Hidden Divergence: Price creates lower highs while MACD forms higher highs, suggesting downtrend continuation. Optimal for adding to short positions during bear market rallies
Multi-Timeframe Coordination Framework:
Three-Timeframe Analysis Structure:
Primary Timeframe (Daily): Determine overall market bias and major trend direction. Only trade in alignment with daily MACD direction
Secondary Timeframe (4H): Identify intermediate trend changes and major entry opportunities. Use for position sizing decisions
Execution Timeframe (1H): Precise entry and exit timing. Look for MACD line crossovers that align with higher timeframe bias
Timeframe Synchronization Rules:
Daily MACD above zero + 4H MACD rising = Strong uptrend context for long positions
Daily MACD below zero + 4H MACD declining = Strong downtrend context for short positions
Conflicting signals between timeframes = Wait for alignment or use smaller position sizes
1H MACD signals only valid when aligned with both higher timeframes
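A sketch of how such a three-timeframe check could be wired up with request.security, using the standard 12/26/9 MACD as a stand-in for the script's configurable algorithms:
```pinescript
//@version=5
indicator("MTF MACD Bias Sketch")
// Daily bias + 4H momentum + 1H trigger, per the synchronization rules above
[macdD, sigD, hD] = request.security(syminfo.tickerid, "D",   ta.macd(close, 12, 26, 9))
[macd4, sig4, h4] = request.security(syminfo.tickerid, "240", ta.macd(close, 12, 26, 9))
[macd1, sig1, h1] = request.security(syminfo.tickerid, "60",  ta.macd(close, 12, 26, 9))
longContext = macdD > 0 and macd4 > macd4[1]          // daily above zero, 4H rising
longTrigger = longContext and ta.crossover(macd1, sig1)
plot(longTrigger ? 1 : 0, "Long trigger", style=plot.style_columns)
```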
Algorithm Considerations by Market Type:
Trending Markets: Responsive algorithms like EMA, HMA may be considered, but effectiveness should be tested for specific market conditions
Volatile Markets: Noise-reducing algorithms like KALMAN_FILTER, SUPER_SMOOTHER may help reduce false signals, though results vary by market
Range-Bound Markets: Stability-focused algorithms like SMA, RMA may provide smoother signals, but individual testing is required
Short Timeframes: Low-lag algorithms like ZLEMA, T3 theoretically respond faster but may also increase noise
Important Note: All algorithm choices and parameter settings should be thoroughly backtested and validated based on specific trading strategies, market conditions, and individual risk tolerance. Different market environments and trading styles may require different configuration approaches.
📋 DETAILED PARAMETER CONFIGURATION
Comprehensive Source Selection Strategy:
Price Source Analysis and Optimization:
Close Price (Default): Most commonly used, reflects final market sentiment of each period. Best for end-of-day analysis, swing trading, daily/weekly timeframes. Advantages: widely accepted standard, good for backtesting comparisons. Disadvantages: ignores intraday price action, may miss important highs/lows
HL2 (High+Low)/2: Midpoint of the trading range, reduces impact of opening gaps and closing spikes. Best for volatile markets, gap-prone assets, forex markets. Calculation impact: smoother MACD signals, reduced noise from price spikes. Optimal when asset shows frequent gaps, high volatility during specific sessions
HLC3 (High+Low+Close)/3: Weighted average emphasizing the close while including range information. Best for balanced analysis, most asset classes, medium-term trading. Mathematical effect: 33% weight to high/low, 33% to close, provides compromise between close and HL2. Use when standard close is too noisy but HL2 is too smooth
OHLC4 (Open+High+Low+Close)/4: True average of all price points, most comprehensive view. Best for complete price representation, algorithmic trading, statistical analysis. Considerations: includes opening sentiment, smoothest of all options but potentially less responsive. Optimal for markets with significant opening moves, comprehensive trend analysis
Parameter Configuration Principles:
Important Note: Different moving average algorithms have distinct mathematical characteristics and response patterns. The same parameter settings may produce vastly different results when using different algorithms. When switching algorithms, parameter settings should be re-evaluated and tested for appropriateness.
Length Parameter Considerations:
Fast Length (Default 12): Shorter periods provide faster response but may increase noise and false signals, longer periods offer more stable signals but slower response, different algorithms respond differently to the same parameters and may require adjustment
Slow Length (Default 26): Should maintain a reasonable proportional relationship with fast length, different timeframes may require different parameter configurations, algorithm characteristics influence optimal length settings
Signal Length (Default 9): Shorter lengths produce more frequent crossovers but may increase false signals, longer lengths provide better signal confirmation but slower response, should be adjusted based on trading style and chosen algorithm characteristics
Comprehensive Algorithm Selection Framework:
MACD Line Algorithm Decision Matrix:
EMA (Standard Choice): Mathematical properties: exponential weighting, recent price emphasis. Best for general use, traditional MACD behavior, backtesting compatibility. Performance characteristics: good balance of speed and smoothness, widely understood behavior
SMA (Stability Focus): Equal weighting of all periods, maximum smoothness. Best for ranging markets, noise reduction, conservative trading. Trade-offs: slower signal generation, reduced sensitivity to recent price changes
HMA (Speed Optimized): Hull Moving Average, designed for reduced lag. Best for trending markets, quick reversals, active trading. Technical advantage: square root period weighting, faster trend detection. Caution: can be more sensitive to noise
KAMA (Adaptive): Kaufman Adaptive MA, adjusts smoothing based on market efficiency. Best for varying market conditions, algorithmic trading. Mechanism: fast smoothing in trends, slow smoothing in sideways markets. Complexity: requires understanding of efficiency ratio
Signal Line Algorithm Optimization Strategies:
Matching Strategy: Use same algorithm for both MACD and signal lines. Benefits: consistent mathematical properties, predictable behavior. Best when backtesting historical strategies, maintaining traditional MACD characteristics
Contrast Strategy: Use different algorithms for optimization. Common combinations: MACD=EMA, Signal=SMA for smoother crossovers, MACD=HMA, Signal=RMA for balanced speed/stability, Advanced: MACD=KAMA, Signal=T3 for adaptive behavior with smooth signals
Market Regime Adaptation: Trending markets: both fast algorithms (EMA/HMA), Volatile markets: MACD=KALMAN_FILTER, Signal=SUPER_SMOOTHER, Range-bound: both slow algorithms (SMA/RMA)
Parameter Sensitivity Considerations:
Impact of Parameter Changes:
Length Parameter Sensitivity: Small parameter adjustments can significantly affect signal timing, while larger adjustments may fundamentally change indicator behavior characteristics
Algorithm Sensitivity: Different algorithms produce different signal characteristics. Thoroughly test the impact on your trading strategy before switching algorithms
Combined Effects: Changing multiple parameters simultaneously can create unexpected effects. Recommendation: adjust parameters one at a time and thoroughly test each change
📈 PERFORMANCE ANALYSIS & COMPETITIVE ADVANTAGES
Response Characteristics by Algorithm:
Fastest Response: ZLEMA, HMA, T3 - minimal lag but higher noise
Balanced Performance: EMA, DEMA, TEMA - good trade-off between speed and stability
Highest Stability: SMA, RMA, TMA - reduced noise but increased lag
Adaptive Behavior: KAMA, FRAMA, MCGINLEY_DYNAMIC - automatically adjust to market conditions
Noise Filtering Capabilities:
Advanced algorithms like KALMAN_FILTER and SUPER_SMOOTHER help reduce false signals compared to traditional EMA-based MACD. Noise-reducing algorithms can provide more stable signals in volatile market conditions, though results will vary based on market conditions and parameter settings.
Market Condition Adaptability:
Unlike fixed-algorithm MACD, this enhanced version allows real-time optimization. Trending markets benefit from responsive algorithms (EMA, HMA), while ranging markets perform better with stable algorithms (SMA, RMA). The ability to switch algorithms without changing indicators provides greater flexibility.
Comparative Performance vs Traditional MACD:
Algorithm Flexibility: 21 algorithms vs 1 fixed EMA
Signal Quality: Reduced false signals through noise filtering algorithms
Market Adaptability: Optimizable for any market condition vs fixed behavior
Customization Options: Independent algorithm selection for MACD and signal lines vs forced matching
Professional Features: Advanced color coding, multiple alert conditions, comprehensive parameter control
USAGE NOTES
This indicator is designed for technical analysis and educational purposes. Like all technical indicators, it has limitations and should not be used as the sole basis for trading decisions. Algorithm performance varies with market conditions, and past characteristics do not guarantee future results. Always combine with proper risk management and thorough strategy testing.
ORB Fusion
🎯 CORE INNOVATION: INSTITUTIONAL ORB FRAMEWORK WITH FAILED BREAKOUT INTELLIGENCE
ORB Fusion represents a complete institutional-grade Opening Range Breakout system combining classic Market Profile concepts (Initial Balance, day type classification) with modern algorithmic breakout detection, failed breakout reversal logic, and comprehensive statistical tracking. Rather than simply drawing lines at opening range extremes, this system implements the full trading methodology used by professional floor traders and market makers—including the critical concept that failed breakouts are often higher-probability setups than successful breakouts.
The Opening Range Hypothesis:
The first 30-60 minutes of trading establishes the day's value area—the price range where the majority of participants agree on fair value. This range is formed during peak information flow (overnight news digestion, gap reactions, early institutional positioning). Breakouts from this range signal directional conviction; failures to hold breakouts signal trapped participants and create exploitable reversals.
Why Opening Range Matters:
1. Information Aggregation: Opening range reflects overnight news, pre-market sentiment, and early institutional orders. It's the market's initial "consensus" on value.
2. Liquidity Concentration: Stop losses cluster just outside the opening range. Breakouts trigger these stops, creating momentum. Failed breakouts trap traders, forcing reversals.
3. Statistical Persistence: Markets exhibit a range expansion tendency—when price accepts above/below the opening range with volume, it often extends 1.0-2.0x the opening range size before mean reversion.
4. Institutional Behavior: Large players (market makers, institutions) use the opening range as a reference for the day's trading plan. They fade extremes on rotation days and follow breakouts on trend days.
Historical Context:
Opening Range Breakout methodology originated in commodity futures pits (1970s-80s) where floor traders noticed consistent patterns: the first 30-60 minutes established a "fair value zone," and directional moves occurred when this zone was violated with conviction. J. Peter Steidlmayer formalized this observation in Market Profile theory, introducing the "Initial Balance" concept—the first hour (two 30-minute periods) defining market structure.
📊 OPENING RANGE CONSTRUCTION
Four ORB Timeframe Options:
1. 5-Minute ORB (0930-0935 ET):
Captures immediate market direction during "opening drive"—the explosive first few minutes when overnight orders hit the tape.
Use Case:
• Scalping strategies
• High-frequency breakout trading
• Extremely liquid instruments (ES, NQ, SPY)
Characteristics:
• Very tight range (often 0.2-0.5% of price)
• Early breakouts common (7 of 10 days break within first hour)
• Higher false breakout rate (50-60%)
• Requires sub-minute chart monitoring
Psychology: Captures panic buyers/sellers reacting to overnight news. Range is small because sample size is minimal—only 5 minutes of price discovery. Early breakouts often fail because they're driven by retail FOMO rather than institutional conviction.
2. 15-Minute ORB (0930-0945 ET):
Balances responsiveness with statistical validity. Captures opening drive plus initial reaction to that drive.
Use Case:
• Day trading strategies
• Balanced scalping/swing hybrid
• Most liquid instruments
Characteristics:
• Moderate range (0.4-0.8% of price typically)
• Breakout rate ~60% of days
• False breakout rate ~40-45%
• Good balance of opportunity and reliability
Psychology: Includes opening panic AND the first retest/consolidation. Sophisticated traders (institutions, algos) start expressing directional bias. This is the "Goldilocks" timeframe—not too reactive, not too slow.
3. 30-Minute ORB (0930-1000 ET):
Classic ORB timeframe. Default for most professional implementations.
Use Case:
• Standard intraday trading
• Position sizing for full-day trades
• All liquid instruments (equities, indices, futures)
Characteristics:
• Substantial range (0.6-1.2% of price)
• Breakout rate ~55% of days
• False breakout rate ~35-40%
• Statistical sweet spot for extensions
Psychology: Full opening auction + first institutional repositioning complete. By 10:00 AM ET, headlines are digested, early stops are hit, and "real" directional players reveal themselves. This is when institutional programs typically finish their opening positioning.
Statistical Advantage: 30-minute ORB shows highest correlation with daily range. When price breaks and holds outside 30m ORB, probability of reaching 1.0x extension (doubling the opening range) exceeds 60% historically.
4. 60-Minute ORB (0930-1030 ET) - Initial Balance:
Steidlmayer's "Initial Balance"—the foundation of Market Profile theory.
Use Case:
• Swing trading entries
• Day type classification
• Low-frequency institutional setups
Characteristics:
• Wide range (0.8-1.5% of price)
• Breakout rate ~45% of days
• False breakout rate ~25-30% (lowest)
• Best for trend day identification
Psychology: Full first hour captures A-period (0930-1000) and B-period (1000-1030). By 10:30 AM ET, all early positioning is complete. Market has "voted" on value. Subsequent price action confirms (trend day) or rejects (rotation day) this value assessment.
Initial Balance Theory:
IB represents the market's accepted value area. When price extends significantly beyond IB (>1.5x IB range), it signals a Trend Day—strong directional conviction. When price remains within 1.0x IB, it signals a Rotation Day—mean reversion environment. This classification completely changes trading strategy.
🔬 LTF PRECISION TECHNOLOGY
The Chart Timeframe Problem:
Traditional ORB indicators calculate range using the chart's current timeframe. This creates critical inaccuracies:
Example:
• You're on a 5-minute chart
• ORB period is 30 minutes (0930-1000 ET)
• Indicator sees only 6 bars (30min ÷ 5min/bar = 6 bars)
• If any 5-minute bar has extreme wick, entire ORB is distorted
The Problem Amplifies:
• On 15-minute chart with 30-minute ORB: Only 2 bars sampled
• On 30-minute chart with 30-minute ORB: Only 1 bar sampled
• Opening spike or single large wick defines entire range (invalid)
Solution: Lower Timeframe (LTF) Precision:
ORB Fusion uses `request.security_lower_tf()` to sample 1-minute bars regardless of chart timeframe:
```
For 30-minute ORB on 15-minute chart:
- Traditional method: Uses 2 bars (15min × 2 = 30min)
- LTF Precision: Requests thirty 1-minute bars, calculates true high/low
```
Why This Matters:
Scenario: ES futures, 15-minute chart, 30-minute ORB
• Traditional ORB: High = 5850.00, Low = 5842.00 (range = 8 points)
• LTF Precision ORB: High = 5848.50, Low = 5843.25 (range = 5.25 points)
Difference: 2.75 points distortion from single 15-minute wick hitting 5850.00 at 9:31 AM then immediately reversing. LTF precision filters this out by seeing it was a fleeting wick, not a sustained high.
Impact on Extensions:
With inflated range (8 points vs 5.25 points):
• 1.5x extension projects +12 points instead of +7.875 points
• Difference: 4.125 points (nearly $200 per ES contract)
• Breakout signals trigger late; extension targets unreachable
Implementation:
```pinescript
getLtfHighLow() =>
    // request.security_lower_tf returns arrays of intrabar values
    array<float> ha = request.security_lower_tf(syminfo.tickerid, "1", high)
    array<float> la = request.security_lower_tf(syminfo.tickerid, "1", low)
    // True session extremes across all 1-minute samples (fall back to chart bar)
    [ha.size() > 0 ? ha.max() : high, la.size() > 0 ? la.min() : low]
```
Function returns arrays of 1-minute high/low values, then finds true maximum and minimum across all samples.
When LTF Precision Activates:
Only when the chart timeframe is coarser than the precision timeframe:
• 5-minute chart + 30-minute ORB: LTF used (each chart bar spans multiple 1-minute bars)
• 1-minute chart + 30-minute ORB: LTF not needed (direct sampling is sufficient)
Recommendation: Always enable LTF Precision unless you're on 1-minute charts. The computational overhead is negligible, and accuracy improvement is substantial.
⚖️ INITIAL BALANCE (IB) FRAMEWORK
Steidlmayer's Market Profile Innovation:
J. Peter Steidlmayer developed Market Profile in the 1980s for the Chicago Board of Trade. His key insight: market structure is best understood through time-at-price (value area) rather than just price-over-time (traditional charts).
Initial Balance Definition:
IB is the price range established during the first hour of trading, subdivided into:
• A-Period: First 30 minutes (0930-1000 ET for US equities)
• B-Period: Second 30 minutes (1000-1030 ET)
A-Period vs B-Period Comparison:
The relationship between A and B periods forecasts the day:
B-Period Expansion (Bullish):
• B-period high > A-period high
• B-period low ≥ A-period low
• Interpretation: Buyers stepping in after opening assessed
• Implication: Bullish continuation likely
• Strategy: Buy pullbacks to A-period high (now support)
B-Period Expansion (Bearish):
• B-period low < A-period low
• B-period high ≤ A-period high
• Interpretation: Sellers stepping in after opening assessed
• Implication: Bearish continuation likely
• Strategy: Sell rallies to A-period low (now resistance)
B-Period Contraction:
• B-period stays within A-period range
• Interpretation: Market indecisive, digesting A-period information
• Implication: Rotation day likely, stay range-bound
• Strategy: Fade extremes, sell high/buy low within IB
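The A/B-period comparison can be prototyped with session windows and running extremes. A minimal sketch assuming the US-equity session times quoted above:
```pinescript
//@version=5
indicator("A/B Period Sketch", overlay=true)
inA = not na(time(timeframe.period, "0930-1000", "America/New_York"))
inB = not na(time(timeframe.period, "1000-1030", "America/New_York"))
var float aHigh = na
var float aLow  = na
var float bHigh = na
var float bLow  = na
if inA
    // Reset on the first A-period bar, then track running extremes
    aHigh := not inA[1] ? high : math.max(aHigh, high)
    aLow  := not inA[1] ? low  : math.min(aLow,  low)
if inB
    bHigh := not inB[1] ? high : math.max(bHigh, high)
    bLow  := not inB[1] ? low  : math.min(bLow,  low)
// B-period expansion (bullish) per the rules above
bullExpansion = bHigh > aHigh and bLow >= aLow
plotshape(bullExpansion and inB[1] and not inB, "Bullish B expansion", style=shape.triangleup, location=location.belowbar)
```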
IB Extensions:
Professional traders use IB as a ruler to project price targets:
Extension Levels:
• 0.5x IB: Initial probe outside value (minor target)
• 1.0x IB: Full extension (major target for normal days)
• 1.5x IB: Trend day threshold (classifies as trending)
• 2.0x IB: Strong trend day (rare, ~10-15% of days)
Calculation:
```
IB Range = IB High - IB Low
Bull Extension 1.0x = IB High + (IB Range × 1.0)
Bear Extension 1.0x = IB Low - (IB Range × 1.0)
```
Example:
ES futures:
• IB High: 5850.00
• IB Low: 5842.00
• IB Range: 8.00 points
Extensions:
• 1.0x Bull Target: 5850 + 8 = 5858.00
• 1.5x Bull Target: 5850 + 12 = 5862.00
• 2.0x Bull Target: 5850 + 16 = 5866.00
If price reaches 5862.00 (1.5x), the day is classified as a Trend Day—strategy shifts from mean reversion to trend following.
📈 DAY TYPE CLASSIFICATION SYSTEM
Four Day Types (Market Profile Framework):
1. TREND DAY:
Definition: Price extends ≥1.5x IB range in one direction and stays there.
Characteristics:
• Opens and never returns to IB
• Persistent directional movement
• Volume increases as day progresses (conviction building)
• News-driven or strong institutional flow
Frequency: ~20-25% of trading days
Trading Strategy:
• DO: Follow the trend, trail stops, let winners run
• DON'T: Fade extremes, take early profits
• Key: Add to position on pullbacks to previous extension level
• Risk: Getting chopped in false trend (see Failed Breakout section)
Example: FOMC decision, payroll report, earnings surprise—anything creating one-sided conviction.
2. NORMAL DAY:
Definition: Price extends 0.5-1.5x IB, tests both sides, returns to IB.
Characteristics:
• Two-sided trading
• Extensions occur but don't persist
• Volume balanced throughout day
• Most common day type
Frequency: ~45-50% of trading days
Trading Strategy:
• DO: Take profits at extension levels, expect reversals
• DON'T: Hold for massive moves
• Key: Treat each extension as a profit-taking opportunity
• Risk: Holding too long when momentum shifts
Example: Typical day with no major catalysts—market balancing supply and demand.
3. ROTATION DAY:
Definition: Price stays within IB all day, rotating between high and low.
Characteristics:
• Never accepts outside IB
• Multiple tests of IB high/low
• Decreasing volume (no conviction)
• Classic range-bound action
Frequency: ~25-30% of trading days
Trading Strategy:
• DO: Fade extremes (sell IB high, buy IB low)
• DON'T: Chase breakouts
• Key: Enter at extremes with tight stops just outside IB
• Risk: Breakout finally occurs after multiple failures
Example: Pre-holiday trading, summer doldrums, consolidation after a big move.
4. DEVELOPING:
Definition: Day type not yet determined (early in session).
Usage: Classification before 12:00 PM ET when IB extension pattern unclear.
ORB Fusion's Classification Algorithm:
```pinescript
float  ibExtension = 0.0
string direction   = "NONE"
if close > ibHigh
    ibExtension := (close - ibHigh) / ibRange   // bullish extension
    direction   := "BULLISH"
else if close < ibLow
    ibExtension := (ibLow - close) / ibRange    // bearish extension
    direction   := "BEARISH"
dayType = ibExtension >= 1.5 ? "TREND DAY" :
     ibExtension >= 0.5 ? "NORMAL DAY" : "ROTATION DAY"
```
Why Classification Matters:
Same setup (bullish ORB breakout) has opposite implications:
• Trend Day: Hold for 2.0x extension, trail stops aggressively
• Normal Day: Take profits at 1.0x extension, watch for reversal
• Rotation Day: Fade the breakout immediately (likely false)
Knowing day type prevents catastrophic errors like fading a trend day or holding through rotation.
🚀 BREAKOUT DETECTION & CONFIRMATION
Three Confirmation Methods:
1. Close Beyond Level (Recommended):
Logic: Candle must close above ORB high (bull) or below ORB low (bear).
Why:
• Filters out wicks (temporary liquidity grabs)
• Ensures sustained acceptance above/below range
• Reduces false breakout rate by ~20-30%
Example:
• ORB High: 5850.00
• Bar high touches 5850.50 (wick above)
• Bar closes at 5848.00 (inside range)
• Result: NO breakout signal
vs.
• Bar high touches 5850.50
• Bar closes at 5851.00 (outside range)
• Result: BREAKOUT signal confirmed
Trade-off: Slightly delayed entry (wait for close) but much higher reliability.
2. Wick Beyond Level:
Logic: Any touch of the ORB high/low triggers a breakout.
Why:
• Earliest possible entry
• Captures aggressive momentum moves
Risk:
• High false breakout rate (60-70%)
• Stop runs trigger signals
• Requires very tight stops (difficult to manage)
Use Case: Scalping with 1-2 point profit targets where any penetration = trade.
3. Body Beyond Level:
Logic: Candle body (close vs open) must be entirely outside the range.
Why:
• Strictest confirmation
• Ensures directional conviction (not just momentum)
• Lowest false breakout rate
Trade-off: Very conservative—misses some valid breakouts but rarely triggers on false ones.
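The three confirmation methods boil down to three comparisons against the range high. A minimal bull-side sketch with a hardcoded placeholder level (the real script derives orbHigh from the session):
```pinescript
//@version=5
indicator("Breakout Confirmation Sketch", overlay=true)
orbHigh = 5850.0   // placeholder; derived from the opening range in practice
method  = input.string("Close", "Confirmation Method", options=["Close", "Wick", "Body"])
closeBrk = close > orbHigh                      // close beyond level
wickBrk  = high > orbHigh                       // any touch
bodyBrk  = math.min(open, close) > orbHigh      // entire body beyond level
bullBreak = method == "Close" ? closeBrk : method == "Wick" ? wickBrk : bodyBrk
plotshape(bullBreak, "Bull breakout", style=shape.triangleup, location=location.belowbar)
```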
Volume Confirmation Layer:
All confirmation methods can require volume validation:
Volume Multiplier Logic: the breakout bar's volume must exceed a configurable multiple of recent average volume (1.5x by default). Rationale: True breakouts are driven by institutional activity (large size). A volume spike confirms real conviction vs. stop-run manipulation.
Statistical Impact:
• Breakouts with volume confirmation: ~65% success rate
• Breakouts without volume: ~45% success rate
• Difference: 20 percentage points edge
Implementation Note:
Volume confirmation adds complexity—you'll miss breakouts that work but lack volume. However, when targeting 1.5x+ extensions (ambitious goals), volume confirmation becomes critical because those moves require sustained institutional participation.
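Extending the previous sketch, the volume gate is a single comparison; the 20-bar SMA baseline here is an assumption, since the description does not specify the averaging window:
```pinescript
// Volume gate sketch; 20-bar SMA baseline is an assumption
volMult = input.float(1.5, "Volume Multiplier")
volOk   = volume > volMult * ta.sma(volume, 20)
confirmedBreak = bullBreak and volOk   // bullBreak from the previous sketch
```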
Recommended Settings by Strategy:
Scalping (1-2 point targets):
• Method: Close
• Volume: OFF
• Rationale: Quick in/out doesn't need perfection
Intraday Swing (5-10 point targets):
• Method: Close
• Volume: ON (1.5x multiplier)
• Rationale: Balance reliability and opportunity
Position Trading (full-day holds):
• Method: Body
• Volume: ON (2.0x multiplier)
• Rationale: Must be certain—large stops require high win rate
🔥 FAILED BREAKOUT SYSTEM
The Core Insight:
Failed breakouts are often more profitable than successful breakouts because they create trapped traders with predictable behavior.
Failed Breakout Definition:
A breakout that:
1. Initially penetrates ORB level with confirmation
2. Attracts participants (volume spike, momentum)
3. Fails to extend (stalls or immediately reverses)
4. Returns inside ORB range within N bars
Psychology of Failure:
When breakout fails:
• Breakout buyers are trapped: Bought at ORB high, now underwater
• Early longs reduce: Take profit, fearful of reversal
• Shorts smell blood: See failed breakout as reversal signal
• Result: Cascade of selling as trapped bulls exit + new shorts enter
Mirror image for failed bearish breakouts (trapped shorts cover + new longs enter).
Failure Detection Parameters:
1. Failure Confirmation Bars (default: 3):
How many bars after the breakout to confirm failure?
Settings:
• 2 bars: Aggressive failure detection (more signals, more false failures)
• 3 bars: Balanced (default)
• 5-10 bars: Conservative (wait for clear reversal)
Why This Matters:
Too few bars: You call "failed breakout" when price is just consolidating before next leg.
Too many bars: You miss the reversal entry (price already back in range).
2. Failure Buffer (default: 0.1 ATR):
How far inside the ORB must price return to confirm failure?
Formula: for a failed bull breakout, failure requires a close below ORB High - (Buffer × ATR); mirrored for failed bear breakouts.
Why Buffer Matters: it demands a clear rejection (not just hovering at the level).
Settings:
• 0.0 ATR: No buffer, immediate failure signal
• 0.1 ATR: Small buffer (default) - filters noise
• 0.2-0.3 ATR: Large buffer - only dramatic failures count
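Putting the confirmation-bars and buffer parameters together, the bull-side failure test can be sketched as follows (orbHigh is again a hardcoded placeholder):
```pinescript
//@version=5
indicator("Failed Breakout Sketch", overlay=true)
orbHigh     = 5850.0   // placeholder; captured per session in practice
confirmBars = input.int(3, "Bars to Confirm Failure")
bufferATR   = input.float(0.1, "Failure Buffer (ATR)")
buffer      = bufferATR * ta.atr(14)
brokeOut    = ta.crossover(close, orbHigh)     // confirmed close above the high
barsSince   = ta.barssince(brokeOut)
// Failure: within N bars of the breakout, price closes back below high − buffer
failedBull  = barsSince <= confirmBars and close < orbHigh - buffer
plotshape(failedBull, "Failed bull breakout", style=shape.xcross, location=location.abovebar)
```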
Reversal Entry System:
When failure confirmed, system generates complete reversal trade:
For Failed Bull Breakout (Short Reversal):
Entry: Current close when failure confirmed
Stop Loss: Extreme high since breakout + 0.10 ATR padding
Target 1: ORB High - (ORB Range × 0.5)
Target 2: ORB High - (ORB Range × 1.0)
Target 3: ORB High - (ORB Range × 1.5)
Example:
• ORB High: 5850, ORB Low: 5842, Range: 8 points
• Breakout to 5853, fails, reverses to 5848 (entry)
• Stop: 5853 + 1 = 5854 (6 point risk)
• T1: 5850 - 4 = 5846 (+2 points, 0.33:1 R:R)
• T2: 5850 - 8 = 5842 (+6 points, 1:1 R:R)
• T3: 5850 - 12 = 5838 (+10 points, 1.67:1 R:R)
Why These Targets?
• T1 (0.5x ORB below the high = ORB Mid): Trapped bulls start to panic
• T2 (1.0x ORB below the high = ORB Low): Major retracement, momentum fully reversed
• T3 (1.5x ORB below the high): Reversal extended, now targeting beyond the opposite side
Historical Performance:
Failed breakout reversals in ORB Fusion's tracking system show:
• Win Rate: 65-75% (significantly higher than initial breakouts)
• Average Winner: 1.2x ORB range
• Average Loser: 0.5x ORB range (protected by stop at extreme)
• Expectancy: Strongly positive even with <70% win rate
Why Failed Breakouts Outperform:
1. Information Advantage: You now know what price did (failed to extend). Initial breakout trades are speculative; reversal trades are reactive to confirmed failure.
2. Trapped Participant Pressure: Every trapped bull becomes a seller. This creates sustained pressure.
3. Stop Loss Clarity: Extreme high is obvious stop (just beyond recent high). Breakout trades have ambiguous stops (ORB mid? Recent low? Too wide or too tight).
4. Mean Reversion Edge: Failed breakouts return to value (ORB mid). Initial breakouts try to escape value (harder to sustain).
Critical Insight:
"The best trade is often the one that trapped everyone else."
Failed breakouts create asymmetric opportunity because you're trading against trapped participants rather than with them. When you see a failed breakout signal, you're seeing real-time evidence that the market rejected directional conviction—that's exploitable.
📐 FIBONACCI EXTENSION SYSTEM
Six Extension Levels:
Extensions project how far price will travel after ORB breakout. Based on Fibonacci ratios + empirical market behavior.
1. 1.272x (27.2% Extension):
Formula: ORB High/Low + (ORB Range × 0.272)
Psychology: Initial probe beyond ORB. Early momentum + trapped shorts (on the bull side) covering.
Probability of Reach: ~75-80% after confirmed breakout
Trading:
• First resistance/support after breakout
• Partial profit target (take 30-50% off)
• Watch for rejection here (could signal failure in progress)
Why 1.272? Related to harmonic patterns (1.272 is √1.618). Empirically, markets often stall at 25-30% extension before deciding whether to continue or fail.
2. 1.5x (50% Extension):
Formula: ORB High/Low + (ORB Range × 0.5)
Psychology: Breakout gaining conviction. Requires sustained buying/selling (not just a momentum spike).
Probability of Reach: ~60-65% after confirmed breakout
Trading:
• Major partial profit (take 50-70% off)
• Move stops to breakeven
• Trail remaining position
Why 1.5x? Classic halfway point to 2.0x. Markets often consolidate here before the final push. If the day type is "Normal," this is likely the high/low for the day.
3. 1.618x (Golden Ratio Extension):
Formula: ORB High/Low + (ORB Range × 0.618)
Psychology: Strong directional day. Institutional conviction + retail FOMO.
Probability of Reach: ~45-50% after confirmed breakout
Trading:
• Final partial profit (close 80-90%)
• Trail remainder with wide stop (allow breathing room)
Why 1.618? Fibonacci golden ratio. Appears consistently in market geometry. When price reaches the 1.618x extension, the move is "mature" and reversal risk increases.
4. 2.0x (100% Extension):
Formula: ORB High/Low + (ORB Range × 1.0)
Psychology: Trend day confirmed. Opening range completely duplicated.
Probability of Reach: ~30-35% after confirmed breakout
Why 2.0x? Psychological level—range doubled. Also corresponds to the typical daily ATR in many instruments (opening range ~0.5 ATR, daily range ~1.0 ATR).
5. 2.618x (Super Extension):
Formula: ORB High/Low + (ORB Range × 1.618)
Psychology: Parabolic move. News-driven or squeeze.
Probability of Reach: ~10-15% after confirmed breakout
Why 2.618? Fibonacci ratio (1.618²). Rare to reach—when it does, the move is extreme. Often precedes multi-day consolidation or reversal.
6. 3.0x (Extreme Extension):
Formula: ORB High/Low + (ORB Range × 2.0)
Psychology: Market melt-up/crash. Only in extreme events.
Probability of Reach: <5% after confirmed breakout
Trading:
• Close immediately if reached
• These are outlier events (black swans, flash crashes, squeeze-outs)
• Holding for more is greed—take the windfall profit
Why 3.0x? Triple the opening range. So rare it's statistical noise. When it happens, it's headline news.
Visual Example:
ES futures, ORB 5842-5850 (8 point range), Bullish breakout:
• ORB High: 5850.00 (entry zone)
• 1.272x: 5850 + 2.18 = 5852.18 (first resistance)
• 1.5x: 5850 + 4.00 = 5854.00 (major target)
• 1.618x: 5850 + 4.94 = 5854.94 (strong target)
• 2.0x: 5850 + 8.00 = 5858.00 (trend day)
• 2.618x: 5850 + 12.94 = 5862.94 (extreme)
• 3.0x: 5850 + 16.00 = 5866.00 (parabolic)
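The level arithmetic in this example is easy to verify in code. A sketch with the example's ES values hardcoded (the script computes the range live each session):
```pinescript
//@version=5
indicator("ORB Extension Sketch", overlay=true)
// Values hardcoded from the ES example above
orbHigh  = 5850.0
orbLow   = 5842.0
orbRange = orbHigh - orbLow
// Bull extension level = ORB High + range × (multiple − 1), e.g. 1.5x → +0.5 × range
bullExt(mult) => orbHigh + orbRange * (mult - 1.0)
plot(bullExt(1.272), "1.272x")   // 5852.18
plot(bullExt(1.5),   "1.5x")     // 5854.00
plot(bullExt(1.618), "1.618x")   // 5854.94
plot(bullExt(2.0),   "2.0x")     // 5858.00
```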
Profit-Taking Strategy:
Optimal scaling out at extensions:
• Breakout entry at 5850.50
• 30% off at 1.272x (5852.18) → +1.68 points
• 40% off at 1.5x (5854.00) → +3.50 points
• 20% off at 1.618x (5854.94) → +4.44 points
• 10% off at 2.0x (5858.00) → +7.50 points
Average Exit: roughly +3.54 points per contract (volume-weighted across the four exits).
Conclusion: Scaling out at extensions produces 40% higher expectancy than holding for home runs.
📊 GAP ANALYSIS & FILL PSYCHOLOGY
Gap Definition:
Price discontinuity between previous close and current open:
• Gap Up: Open > Previous Close + noise threshold (0.1 ATR)
• Gap Down: Open < Previous Close - noise threshold
Why Gaps Matter:
Gaps represent unfilled orders. When the market gaps up, all limit buy orders between yesterday's close and today's open are never filled. Those buyers are "left behind." Psychology: they wait for price to return ("fill the gap") so they can enter. This creates a magnetic pull toward the gap level.
Gap Fill Statistics (Empirical):
• Gaps <0.5%: 85-90% fill within the same day
• Gaps 0.5-1.0%: 70-75% fill within the same day, 90%+ within the week
• Gaps >1.0%: 50-60% fill within the same day (major news often prevents fill)
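The gap definition above translates to a comparison of today's open against the prior daily close with an ATR-scaled threshold. A minimal sketch; the non-repainting security call is one common way to fetch the prior close:
```pinescript
//@version=5
indicator("Gap Detection Sketch", overlay=true)
atr14     = ta.atr(14)
// Prior session close (close[1] on the daily feed avoids repainting)
prevClose = request.security(syminfo.tickerid, "D", close[1], lookahead = barmerge.lookahead_on)
newDay    = ta.change(time("D")) != 0
gapUp     = newDay and open > prevClose + 0.1 * atr14
gapDown   = newDay and open < prevClose - 0.1 * atr14
plot(prevClose, "Gap fill level (prior close)", color.gray)
plotshape(gapUp,   "Gap up",   style=shape.triangleup,   location=location.belowbar)
plotshape(gapDown, "Gap down", style=shape.triangledown, location=location.abovebar)
```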
Gap Fill Strategy:
Setup 1: Gap-and-Go
Gap opens, extends away from gap (doesn't fill).
• ORB confirms direction away from gap
• Trade WITH ORB breakout direction
• Expectation: Gap won't fill today (momentum too strong)
Setup 2: Gap-Fill Fade
Gap opens, but fails to extend. Price drifts back toward gap.
• ORB breakout TOWARD gap (not away)
• Trade toward gap fill level
• Target: Previous close (gap fill complete)
Setup 3: Gap-Fill Rejection
Gap fills (touches previous close) then rejects.
• ORB breakout AWAY from gap after fill
• Trade away from gap direction
• Thesis: Gap filled (orders executed), now resume original direction
Example: Previous close $450.00; today opens near $452.00, leaving an unfilled gap below.
Scenario A (Gap-and-Go):
• ORB breaks upward to $454 (away from gap)
• Trade: LONG breakout, expect continued rally
• Gap becomes support ($452)
Scenario B (Gap-Fill):
• ORB breaks downward through $452.50 (toward gap)
• Trade: SHORT toward gap fill at $450.00
• Target: $450.00 (gap filled), close position
Scenario C (Gap-Fill Rejection):
• Price drifts to $450.00 (gap filled) early in session
• ORB establishes $450-$451 after gap fill
• ORB breaks upward to $451.50
• Trade: LONG breakout (gap is filled, now resume rally)
ORB Fusion Integration:
Dashboard shows:
• Gap type (Up/Down/None)
• Gap size (percentage)
• Gap fill status (Filled ✓ / Open)
This informs setup confidence:
• ORB breakout AWAY from unfilled gap: +10% confidence (gap becomes support/resistance)
• ORB breakout TOWARD unfilled gap: -10% confidence (gap fill may override ORB)
📈 VWAP & INSTITUTIONAL BIAS
Volume-Weighted Average Price (VWAP):
Average price weighted by volume at each price level. Represents the true "average" cost for the day.
Calculation: cumulative Σ(price × volume) ÷ Σ(volume) from the session open.
Why VWAP Matters:
1. Institutional Benchmark: Institutions (mutual funds, pension funds) use VWAP as a performance benchmark. If they buy above VWAP, they underperformed; below VWAP, they outperformed.
2. Algorithmic Target: Many algos are programmed to buy below VWAP and sell above VWAP to achieve "fair" execution.
3. Support/Resistance: VWAP acts as dynamic support (price above) or resistance (price below).
VWAP Bands (Standard Deviations):
• 1σ Band: VWAP ± 1 standard deviation
- Contains ~68% of volume
- Normal trading range
- Bounces common
• 2σ Band: VWAP ± 2 standard deviations
- Contains ~95% of volume
- Extreme extension
- Mean reversion likely
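Pine v5's built-in ta.vwap can anchor to the session and return band tuples, which is one way to reproduce the bands described here:
```pinescript
//@version=5
indicator("VWAP Bands Sketch", overlay=true)
// Session-anchored VWAP with ±1σ / ±2σ bands
newSession = timeframe.change("D")
[vw, up1, dn1] = ta.vwap(hlc3, newSession, 1.0)
[v2, up2, dn2] = ta.vwap(hlc3, newSession, 2.0)
plot(vw,  "VWAP", color.orange)
plot(up1, "+1σ", color.teal)
plot(dn1, "-1σ", color.teal)
plot(up2, "+2σ", color.red)
plot(dn2, "-2σ", color.red)
```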
ORB + VWAP Confluence:
Highest-probability setups occur when ORB and VWAP align:
Bullish Confluence:
• ORB breakout upward (bullish signal)
• Price above VWAP (institutional buying)
• Confidence boost: +15%
Bearish Confluence:
• ORB breakout downward (bearish signal)
• Price below VWAP (institutional selling)
• Confidence boost: +15%
Divergence Warning:
• ORB breakout upward BUT price below VWAP
• Conflict: Breakout says "buy," VWAP says "sell"
• Confidence penalty: -10%
• Interpretation: Retail buying but institutions not participating (lower quality breakout)
📊 MOMENTUM CONTEXT SYSTEM
Innovation: Candle Coloring by Position
Rather than fixed support/resistance lines, ORB Fusion colors candles based on their relationship to the ORB:
Three Zones:
1. Inside ORB (Blue Boxes):
Calculation: color intensity scales with proximity to the ORB extremes:
• Darker blue: Near extremes of ORB (potential breakout imminent)
• Lighter blue: Near ORB mid (consolidation)
Trading: Coiled spring—await breakout.
2. Above ORB (Green Boxes):
Calculation: intensity scales with distance above the range.
3. Below ORB (Red Boxes):
Mirror of above ORB logic.
Special Contexts:
Breakout Bar (Darkest Green/Red):
The specific bar where the breakout occurs gets maximum color intensity regardless of distance. This highlights the pivotal moment.
Failed Breakout Bar (Orange/Warning):
When a failed breakout is confirmed, that bar gets an orange/warning color. Visual alert: "reversal opportunity here."
Near Extension (Cyan/Magenta Tint):
When price is within 0.5 ATR of an extension level, the candle gets tinted cyan (bull) or magenta (bear). Indicates "target approaching—prepare to take profit."
Why Visual Context?
Traditional indicators show lines. ORB Fusion shows context-aware momentum. Glance at the chart:
• Lots of blue? Consolidation day (fade extremes).
• Progressive green? Trend day (follow).
• Green then orange? Failed breakout (reversal setup).
This visual language communicates market state instantly—no interpretation needed.
🎯 TRADE SETUP GENERATION & GRADING
Algorithmic Setup Detection:
ORB Fusion continuously evaluates market state and generates current best trade setup with:
• Action (LONG / SHORT / FADE HIGH / FADE LOW / WAIT)
• Entry price
• Stop loss
• Three targets
• Risk:Reward ratio
• Confidence score (0-100)
• Grade (A+ to D)
Setup Types:
1. ORB LONG (Bullish Breakout):
Trigger:
• Bullish ORB breakout confirmed
• Not failed
Parameters:
• Entry: Current close
• Stop: ORB mid (protects against failure)
• T1: ORB High + 0.5x range (1.5x extension)
• T2: ORB High + 1.0x range (2.0x extension)
• T3: ORB High + 1.618x range (2.618x extension)
Confidence Scoring:
4. LONG REVERSAL (Failed Bearish Breakout):
Trigger:
• Bearish breakout occurred
• Failed (returned inside ORB)
Parameters:
• Entry: Close when failure confirmed
• Stop: Extreme low since breakout + 0.10 ATR
• T1: ORB Low + 0.5x range (ORB Mid)
• T2: ORB Low + 1.0x range (ORB High)
• T3: ORB Low + 1.5x range
Confidence Scoring:
5. FADE HIGH (Range Trade):
Trigger:
• Inside ORB
• Close > ORB mid (near high)
Parameters:
• Entry: ORB High (limit order)
• Stop: ORB High + 0.2x range
• T1: ORB Mid
• T2: ORB Low
Confidence Scoring:
Base: 40 points (lower base—range fading is lower probability than breakout/reversal)
Use Case: Rotation days. Not recommended on normal/trend days.
6. FADE LOW (Range Trade):
Mirror of FADE HIGH.
7. WAIT:
Trigger:
• ORB not complete yet OR
• No clear setup (price in no-man's-land)
Action: Observe, don't trade.
Confidence: 0 points
Grading System:
```
Confidence → Grade
85-100 → A+
75-84 → A
65-74 → B+
55-64 → B
45-54 → C
0-44 → D
```
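The table converts to a one-line Pine function:
```pinescript
// Direct translation of the grade table above
grade(conf) =>
    conf >= 85 ? "A+" : conf >= 75 ? "A" : conf >= 65 ? "B+" : conf >= 55 ? "B" : conf >= 45 ? "C" : "D"
```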
Grade Interpretation:
• A+ / A: High probability setup. Take these trades.
• B+ / B: Decent setup. Trade if it fits your system rules.
• C: Marginal setup. Only if very experienced.
• D: Poor setup or no setup. Don't trade.
Example Scenario:
ES futures:
• ORB: 5842-5850 (8 point range)
• Bullish breakout to 5851 confirmed
• Volume: 2.0x average (confirmed)
• VWAP: 5845 (price above VWAP ✓)
• Day type: Developing (too early, no bonus)
• Gap: None
Setup:
• Action: LONG
• Entry: 5851
• Stop: 5846 (ORB mid, -5 point risk)
• T1: 5854 (+3 points, 1:0.6 R:R)
• T2: 5858 (+7 points, 1:1.4 R:R)
• T3: 5862.94 (+11.94 points, 1:2.4 R:R)
Confidence: 55 points → Grade B. Action: LONG with 55% confidence.
Interpretation: Solid setup, not perfect. Trade it if your system allows B-grade signals.
📊 STATISTICS TRACKING & PERFORMANCE ANALYSIS
Real-Time Performance Metrics:
ORB Fusion tracks comprehensive statistics over a user-defined lookback (default 50 days):
Breakout Performance:
• Bull Breakouts: Total count, wins, losses, win rate
• Bear Breakouts: Total count, wins, losses, win rate
Win Definition: Breakout reaches ≥1.0x extension (doubles the opening range) before end of day.
Example:
• ORB: 5842-5850 (8 points)
• Bull breakout at 5851
• Reaches 5858 (1.0x extension) by close
• Result: WIN
Failed Breakout Performance:
• Total Failed Breakouts: Count of breakouts that failed
• Reversal Wins: Count where the reversal trade reached its target
• Failed Reversal Win Rate: Wins / Total Failed
Win Definition for Reversals:
• Failed bull → reversal short reaches ORB mid
• Failed bear → reversal long reaches ORB mid
Extension Tracking:
• Average Extension Reached: Mean of the maximum extension achieved across all breakout days
• Max Extension Overall: Largest extension achieved in the lookback period
🎨 THREE DISPLAY MODES
Design Philosophy:
Not all traders need all features. Beginners want simplicity. Professionals want everything. ORB Fusion adapts.
SIMPLE MODE:
Shows:
• Primary ORB levels (High, Mid, Low)
• ORB box
• Breakout signals (triangles)
• Failed breakout signals (crosses)
• Basic dashboard (ORB status, breakout status, setup)
• VWAP
Hides:
• Session ORBs (Asian, London, NY)
• IB levels and extensions
• ORB extensions beyond basic levels
• Gap analysis visuals
• Statistics dashboard
• Momentum candle coloring
• Narrative dashboard
Use Case:
• Traders who want clean chart
• Focus on core ORB concept only
• Mobile trading (less screen space)
STANDARD MODE:
Shows Everything in Simple, Plus:
• Session ORBs (Asian, London, NY)
• IB levels (high, low, mid)
• IB extensions
• ORB extensions (1.272x, 1.5x, 1.618x, 2.0x)
• Gap analysis and fill targets
• VWAP bands (1σ and 2σ)
• Momentum candle coloring
• Context section in dashboard
• Narrative dashboard
Hides:
• Advanced extensions (2.618x, 3.0x)
• Detailed statistics dashboard
Use Case:
• Most traders
• Balance between information and clarity
• Covers 90% of use cases
ADVANCED MODE:
Shows Everything:
• All session ORBs
• All IB levels and extensions
• All ORB extensions (including 2.618x and 3.0x)
• Full gap analysis
• VWAP with both 1σ and 2σ bands
• Momentum candle coloring
• Complete statistics dashboard
• Narrative dashboard
• All context metrics
Use Case:
• Professional traders
• System developers
• Those who want maximum information density
Switching Modes:
Single dropdown input: "Display Mode" → Simple / Standard / Advanced
Entire indicator adapts instantly. No need to toggle 20 individual settings.
📖 NARRATIVE DASHBOARD
Innovation: Plain-English Market State
Most indicators show data. ORB Fusion explains what the data means.
Narrative Components:
1. Phase:
• "📍 Building ORB..." (during ORB session)
• "📊 Trading Phase" (after ORB complete)
• "⏳ Pre-Market" (before ORB session)
2. Status (Current Observation):
• "⚠️ Failed breakout - reversal likely"
• "🚀 Bullish momentum in play"
• "📉 Bearish momentum in play"
• "⚖️ Consolidating in range"
• "👀 Monitoring for setup"
3. Next Level:
Tells you what to watch for:
• "🎯 1.5x @ 5854.00" (next extension target)
• "Watch ORB levels" (inside range, await breakout)
4. Setup:
Current trade setup + grade:
• "LONG" (bullish breakout, A-grade)
• "🔥 SHORT REVERSAL" (failed bull breakout, A+ grade)
• "WAIT" (no setup)
5. Reason:
Why this setup exists:
• "ORB Bullish Breakout"
• "Failed Bear Breakout - High Probability Reversal"
• "Range Fade - Near High"
6. Tip (Market Insight):
Contextual advice:
• "🔥 TREND DAY - Trail stops" (day type is trending)
• "🔄 ROTATION - Fade extremes" (day type is rotating)
• "📊 Gap unfilled - magnet level" (gap creates target)
• "📈 Normal conditions" (no special context)
Example Narrative:
```
📖 ORB Narrative
━━━━━━━━━━━━━━━━
Phase | 📊 Trading Phase
Status | 🚀 Bullish momentum in play
Next | 🎯 1.5x @ 5854.00
📈 Setup | LONG
Reason | ORB Bullish Breakout
💡 Tip | 🔥 TREND DAY - Trail stops
```
Glance Interpretation:
"We're in trading phase. Bullish breakout happened (momentum in play). Next target is 1.5x extension at 5854. Current setup is LONG with A-grade. It's a trend day, so trail stops (don't take early profits)."
Complete market state communicated in 6 lines. No interpretation needed.
Why This Matters:
Beginner traders struggle with the "So what?" question. Indicators show lines and signals, but what does it mean? The narrative dashboard bridges this gap.
Professional traders benefit too—rapid context assessment during fast-moving markets. No time to analyze; glance at narrative, get action plan.
🔔 INTELLIGENT ALERT SYSTEM
Four Alert Types:
1. Breakout Alert:
Trigger: ORB breakout confirmed (bull or bear)
Message:
```
🚀 ORB BULLISH BREAKOUT
Price: 5851.00
Volume Confirmed
Grade: A
```
Frequency: Once per bar (prevents spam)
2. Failed Breakout Alert:
Trigger: Breakout fails, reversal setup generated
Message:
```
🔥 FAILED BULLISH BREAKOUT!
HIGH PROBABILITY SHORT REVERSAL
Entry: 5848.00
Stop: 5854.00
T1: 5846.00
T2: 5842.00
Historical Win Rate: 73%
```
Why Comprehensive? Failed breakout alerts include the complete trade plan. You can execute immediately from the alert—no need to check the chart.
3. Extension Alert:
Trigger: Price reaches an extension level for the first time
Message:
```
🎯 Bull Extension 1.5x reached @ 5854.00
```
Use: Profit-taking reminder. When an extension is hit, consider scaling out.
4. IB Break Alert:
Trigger: Price breaks above IB high or below IB low
Message:
```
📊 IB HIGH BROKEN - Potential Trend Day
```
Use: Day type classification. An IB break suggests a trend day developing—adjust strategy to trend-following mode.
Alert Management:
Each alert type can be enabled/disabled independently. Prevents notification overload.
Cooldown Logic:
Alerts won't fire if same alert type triggered within last bar. Prevents:
• "Breakout" alert every tick during choppy breakout
• Multiple "extension" alerts if price oscillates at level
Ensures: One clean alert per event.
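In Pine terms, once-per-bar alert frequency is what enforces this kind of cooldown. A minimal sketch with an illustrative trigger condition:
```pinescript
//@version=5
indicator("ORB Alert Sketch", overlay=true)
orbHigh      = 5850.0                        // placeholder level
bullBreakout = ta.crossover(close, orbHigh)  // illustrative trigger
// alert.freq_once_per_bar suppresses repeat firings within the same bar
if bullBreakout
    alert("🚀 ORB BULLISH BREAKOUT @ " + str.tostring(close), alert.freq_once_per_bar)
```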
⚙️ KEY PARAMETERS EXPLAINED
Opening Range Settings:
• ORB Timeframe (5/15/30/60 min): Duration of the opening range window
- 30 min recommended for most traders
• Use RTH Only (ON/OFF): Only trade during regular trading hours
- ON recommended (avoids thin overnight markets)
• Use LTF Precision (ON/OFF): Sample 1-minute bars for accuracy
- ON recommended (critical for charts >1 minute)
• Precision TF (1/5 min): Timeframe for LTF sampling
- 1 min recommended (most accurate)
Session ORBs:
• Show Asian/London/NY ORB (ON/OFF): Display multi-session ranges
- OFF in Simple mode
- ON in Standard/Advanced if trading 24hr markets
• Session Windows: Time ranges for each session ORB
- Defaults align with major session opens
Initial Balance:
• Show IB (ON/OFF): Display Initial Balance levels
- ON recommended for day type classification
• IB Session Window (0930-1030): First hour of trading
- Default is standard for US equities
• Show IB Extensions (ON/OFF): Project IB extension targets
- ON recommended (identifies trend days)
• IB Extensions 1-4 (0.5x, 1.0x, 1.5x, 2.0x): Extension multipliers
- Defaults are Market Profile standard
ORB Extensions:
• Show Extensions (ON/OFF): Project ORB extension targets
- ON recommended (defines profit targets)
• Enable Individual Extensions (1.272x, 1.5x, 1.618x, 2.0x, 2.618x, 3.0x)
- Enable 1.272x, 1.5x, 1.618x, 2.0x at minimum
- Disable 2.618x and 3.0x unless trading very volatile instruments
Breakout Detection:
• Confirmation Method (Close/Wick/Body):
- Close recommended (best balance)
- Wick for scalping
- Body for conservative
• Require Volume Confirmation (ON/OFF):
- ON recommended (increases reliability)
• Volume Multiplier (1.0-3.0):
- 1.5x recommended
- Lower for thin instruments
- Higher for heavy-volume instruments
Failed Breakout System:
• Enable Failed Breakouts (ON/OFF):
- ON strongly recommended (highest edge)
• Bars to Confirm Failure (2-10):
- 3 bars recommended
- 2 for aggressive (more signals, more false failures)
- 5+ for conservative (fewer signals, higher quality)
• Failure Buffer (0.0-0.5 ATR):
- 0.1 ATR recommended
- Filters noise during consolidation near the ORB level
• Show Reversal Targets (ON/OFF):
- ON recommended (visualizes the trade plan)
• Reversal Target Mults (0.5x, 1.0x, 1.5x):
- Defaults are tested values
- Adjust based on average daily range
Gap Analysis:
• Show Gap Analysis (ON/OFF):
- ON if trading instruments that gap frequently
- OFF for 24hr markets (forex, crypto—no gaps)
• Gap Fill Target (ON/OFF):
- ON to visualize the previous close (gap fill level)
VWAP:
• Show VWAP (ON/OFF):
- ON recommended (key institutional level)
• Show VWAP Bands (ON/OFF):
- ON in Standard/Advanced
- OFF in Simple
• Band Multipliers (1.0σ, 2.0σ):
- Defaults are standard
- 1σ = normal range, 2σ = extreme
Day Type:
• Show Day Type Analysis (ON/OFF):
- ON recommended (critical for strategy adaptation)
• Trend Day Threshold (1.0-2.5 IB mult):
- 1.5x recommended
- When price extends >1.5x IB, the day classifies as a Trend Day
Enhanced Visuals:
• Show Momentum Candles (ON/OFF):
- ON for visual context
- OFF if the chart gets too colorful
• Show Gradient Zone Fills (ON/OFF):
- ON for a professional look
- OFF for a minimalist chart
• Label Display Mode (All/Adaptive/Minimal):
- Adaptive recommended (shows nearby labels only)
- All for information density
- Minimal for a clean chart
• Label Proximity (1.0-5.0 ATR):
- 3.0 ATR recommended
- Labels beyond this distance are hidden (Adaptive mode)
🎓 PROFESSIONAL USAGE PROTOCOL
Phase 1: Learning the System (Week 1)
Goal: Understand ORB concepts and dashboard interpretation
Setup:
• Display Mode: STANDARD
• ORB Timeframe: 30 minutes
• Enable ALL features (IB, extensions, failed breakouts, VWAP, gap analysis)
• Enable statistics tracking
Actions:
• Paper trade ONLY—no real money
• Observe ORB formation every day (9:30-10:00 AM ET for US markets)
• Note when ORB breakouts occur and if they extend
• Note when breakouts fail and reversals happen
• Watch day type classification evolve during session
• Track statistics—which setups are working?
Key Learning:
• How often do breakouts reach 1.5x extension? (typically 50-60% of confirmed breakouts)
• How often do breakouts fail? (typically 30-40%)
• Which setup grade (A/B/C) actually performs best? (should see A-grade outperforming)
• What day type produces best results? (trend days favor breakouts, rotation days favor fades)
Phase 2: Parameter Optimization (Week 2)
Goal: Tune the system to your instrument and timeframe
ORB Timeframe Selection:
• Run 5 days with 15-minute ORB
• Run 5 days with 30-minute ORB
• Compare: Which captures better breakouts on your instrument?
• Typically: 30-minute optimal for most, 15-minute for very liquid (ES, SPY)
Volume Confirmation Testing:
• Run 5 days WITH volume confirmation
• Run 5 days WITHOUT volume confirmation
• Compare: Does volume confirmation increase win rate?
• If win rate improves by >5%: Keep volume confirmation ON
• If no improvement: Turn OFF (avoid missing valid breakouts)
Failed Breakout Bars:
Phase 3: Rule Development
Goal: Develop personal trading rules based on system signals
Setup Selection Rules:
Define which setups you'll trade:
• Conservative: Only A+ and A grades
• Balanced: A+, A, and B+ grades
• Aggressive: All grades B and above
Test each approach for 5-10 trades, compare results.
Position Sizing by Grade:
Consider risk-weighting by setup quality:
• A+ grade: 100% position size
• A grade: 75% position size
• B+ grade: 50% position size
• B grade: 25% position size
Example: If max risk is $1000/trade:
• A+ setup: Risk $1000
• A setup: Risk $750
• B+ setup: Risk $500
This matches bet sizing to edge.
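Expressed as a tiny Pine helper (the grades and weights simply mirror the table above; the function name is illustrative):
```pine
//@version=5
indicator("Grade Risk Sketch")

maxRisk = input.float(1000.0, "Max Risk per Trade ($)")

// Risk weight by setup grade, matching the percentages above
gradeWeight(string grade) =>
    switch grade
        "A+" => 1.00
        "A"  => 0.75
        "B+" => 0.50
        "B"  => 0.25
        => 0.0                 // anything below B: stand aside

// With the default input, gradeWeight("A") * maxRisk == 750
plot(gradeWeight("A") * maxRisk, "A-grade risk ($)")
```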
Day Type Adaptation:
Create rules for different day types:
Trend Days:
• Take ALL breakout signals (A/B/C grades)
• Hold for 2.0x extension minimum
• Trail stops aggressively (1.0 ATR trail)
• DON'T fade—reversals unlikely
Rotation Days:
• ONLY take failed breakout reversals
• Ignore initial breakout signals (likely to fail)
• Take profits quickly (0.5x extension)
• Focus on fade setups (Fade High/Fade Low)
Normal Days:
• Take A/A+ breakout signals only
• Take ALL failed breakout reversals (high probability)
• Target 1.0-1.5x extensions
• Partial profit-taking at extensions
Time-of-Day Rules:
Breakouts at different times have different probabilities:
10:00-10:30 AM (Early Breakout):
• ORB just completed
• Fresh breakout
• Probability: Moderate (50-55% reach 1.0x)
• Strategy: Conservative position sizing
10:30-12:00 PM (Mid-Morning):
• Momentum established
• Volume still healthy
• Probability: High (60-65% reach 1.0x)
• Strategy: Standard position sizing
12:00-2:00 PM (Lunch Doldrums):
• Volume dries up
• Whipsaw risk increases
• Probability: Low (40-45% reach 1.0x)
• Strategy: Avoid new entries OR reduce size 50%
2:00-4:00 PM (Afternoon Session):
• Late-day positioning
• EOD squeezes possible
• Probability: Moderate-High (55-60%)
• Strategy: Watch for IB break—if trending all day, follow
Phase 4: Live Micro-Sizing (Month 2)
Goal: Validate paper trading results with minimal risk
Setup:
• 10-20% of intended full position size
• Take ONLY A+ and A grade setups
• Follow stop loss and targets religiously
Execution:
• Execute from alerts OR from dashboard setup box
• Entry: Close of signal bar OR next bar market order
• Stop: Use exact stop from setup (don't widen)
• Targets: Scale out at T1/T2/T3 as indicated
Tracking:
• Log every trade: Entry, Exit, Grade, Outcome, Day Type
• Calculate: Win rate, Average R-multiple, Max consecutive losses
• Compare to paper trading results (should be within 15%)
Red Flags:
• Win rate <45%: System not suitable for this instrument/timeframe
• Major divergence from paper trading: Execution issues (slippage, late entries, emotional exits)
• Max consecutive losses >8: Hitting rough patch OR market regime changed
Phase 5: Scaling Up (Months 3-6)
Goal: Gradually increase to full position size
Progression:
• Month 3: 25-40% size (if micro-sizing profitable)
• Month 4: 40-60% size
• Month 5: 60-80% size
• Month 6: 80-100% size
Milestones Required to Scale Up:
• Minimum 30 trades at current size
• Win rate ≥48%
• Profit factor ≥1.2
• Max drawdown <20%
• Emotional control (no revenge trading, no FOMO)
Advanced Techniques:
Multi-Timeframe ORB:
Core Assumptions:
1. Opening Range Establishes Value: Assumes the first 30-60 minutes establish value. Violation: the market opens after major news and price discovery continues for hours (the opening range is meaningless).
2. Volume Indicates Conviction: Assumes high breakout volume reflects institutional participation and genuine conviction.
Performs Best On:
• Index Futures & ETFs: ES, NQ, RTY, SPY, QQQ—high liquidity, clean ORB formation, reliable extensions
• Large-Cap Stocks: AAPL, MSFT, TSLA, NVDA (>$5B market cap, >5M daily volume)
• Liquid Futures: CL (crude oil), GC (gold), 6E (EUR/USD), ZB (bonds)—24hr markets benefit from session ORBs
• Major Forex Pairs: EUR/USD, GBP/USD, USD/JPY—London/NY session ORBs work well
Performs Poorly On:
• Illiquid Stocks: <$1M daily volume, wide spreads, gappy price action
• Penny Stocks: Manipulated, pump-and-dump schemes, no real price discovery
• Low-Volume ETFs: Exotic sector ETFs, leveraged products with thin volume
• Crypto on Sketchy Exchanges: Wash trading and spoofing invalidate volume analysis
• Earnings Days: The ORB completes before the earnings release, then completely resets (useless)
• Binary Event Days: FDA approvals, court rulings—discontinuous price action
Known Weaknesses:
• Slow Starts: The ORB doesn't complete until 10:00 AM (30-min ORB), so early-morning traders have no signals for 30 minutes. Consider the 15-minute ORB if this is problematic.
• Failure Detection Lag: A failed breakout requires 3+ bars to confirm. By the time the system signals a reversal, price may already have moved significantly back inside the range. Manual traders watching in real time can enter earlier.
• Extension Overshoot: The system projects extensions mathematically (1.5x, 2.0x, etc.). Actual moves may stop short (1.3x) or overshoot (2.2x). Extensions are targets, not magnets.
• Day Type Misclassification: Early in the session, the day type is "Developing." By the time it's classified definitively (often 11:00 AM or later), half the day is over, so strategy adjustments happen late.
• Gap Assumptions: The system assumes gaps want to fill. Strong trend days never fill gaps (the gap becomes lasting support/resistance). Blindly trading toward gaps can backfire on trend days.
• Volume Data Quality: Forex has no centralized volume (tick volume serves as a less reliable proxy), and crypto volume is often fake (wash trading). Volume confirmation is less effective on these instruments.
• Multi-Session Complexity: When using Asian/London/NY ORBs simultaneously, the chart becomes cluttered. It requires discipline to focus on the session relevant to the current time.
Risk Factors:
• Opening Gaps: Large gaps (>2%) can create distorted ORBs. The opening range might be unusually wide or narrow, making extensions unreliable.
• Low-Volatility Environments: When VIX <12, opening ranges can be tiny (0.2-0.3%). Extensions are equally tiny, and profit targets don't justify commission/slippage.
• High-Volatility Environments: When VIX >30, opening ranges are huge (2-3%+). Extensions project unrealistic targets, and failed breakouts happen faster (volatility whipsaw).
• Algorithm Dominance: In heavily algorithmic markets (e.g., ES during the overnight session), ORB levels can be manipulated—algos intentionally pin price to the ORB high/low. Breakouts become stop-runs rather than genuine directional moves.
⚠️ RISK DISCLOSURE
Trading futures, stocks, options, forex, and cryptocurrencies involves substantial risk of loss and is not suitable for all investors. Opening Range Breakout strategies, while based on sound market structure principles, do not guarantee profits and can result in significant losses.
The ORB Fusion indicator implements professional trading concepts including Opening Range theory, Market Profile Initial Balance analysis, Fibonacci extensions, and failed breakout reversal logic. These methodologies have theoretical foundations but past performance—whether backtested or live—is not indicative of future results.
Opening Range theory assumes the first 30-60 minutes of trading establish a meaningful value area and that breakouts from this range signal directional conviction. This assumption may not hold during:
• Major news events (FOMC, NFP, earnings surprises)
• Market structure changes (circuit breakers, trading halts)
• Low liquidity periods (holidays, early closures)
• Algorithmic manipulation or spoofing
Failed breakout detection relies on patterns of trapped participant behavior. While historically these patterns have shown statistical edges, market conditions change. Institutional algorithms, changing market structure, or regime shifts can reduce or eliminate edges that existed historically.
Initial Balance classification (trend day vs rotation day vs normal day) is a heuristic framework, not a deterministic prediction. Day type can change mid-session. Early classification may prove incorrect as the day develops.
Extension projections (1.272x, 1.5x, 1.618x, 2.0x, etc.) are probabilistic targets derived from Fibonacci ratios and empirical market behavior. They are not "support and resistance levels" that price must reach or respect. Markets can stop short of extensions, overshoot them, or ignore them entirely.
Volume confirmation assumes high volume indicates institutional participation and conviction. In algorithmic markets, volume can be artificially high (HFT activity) or artificially low (dark pools, internalization). Volume is a proxy, not a guarantee of conviction.
LTF precision sampling improves ORB accuracy by using 1-minute bars but introduces additional data dependencies. If 1-minute data is unavailable, inaccurate, or delayed, ORB calculations will be incorrect.
The grading system (A+/A/B+/B/C/D) and confidence scores aggregate multiple factors (volume, VWAP, day type, IB expansion, gap context) into a single assessment. This is a mechanical calculation, not artificial intelligence. The system cannot adapt to unprecedented market conditions or events outside its programmed logic.
Real trading involves slippage, commissions, latency, partial fills, and rejected orders not present in indicator calculations. ORB Fusion generates signals at bar close; actual fills occur with delay. Opening range forms during highest volatility (first 30 minutes)—spreads widen, slippage increases. Execution quality significantly impacts realized results.
Statistics tracking (win rates, extension levels reached, day type distribution) is based on historical bars in your lookback window. If lookback is small (<50 bars) or market regime changed, statistics may not represent future probabilities.
Users must independently validate system performance on their specific instruments, timeframes, and broker execution environment. Paper trade extensively (100+ trades minimum) before risking capital. Start with micro position sizing (5-10% of intended size) for 50+ trades to validate execution quality matches expectations.
Never risk more than you can afford to lose completely. Use proper position sizing (0.5-2% risk per trade maximum). Implement stop losses on every single trade without exception. Understand that most retail traders lose money—sophisticated indicators do not change this fundamental reality. They systematize analysis but cannot eliminate risk.
The developer makes no warranties regarding profitability, suitability, accuracy, reliability, or fitness for any purpose. Users assume full responsibility for all trading decisions, parameter selections, risk management, and outcomes.
By using this indicator, you acknowledge that you have read, understood, and accepted these risk disclosures and limitations, and you accept full responsibility for all trading activity and potential losses.
═══════════════════════════════════════════════════════════════════════════════
CLOSING STATEMENT
═══════════════════════════════════════════════════════════════════════════════
Opening Range Breakout is not a trick. It's a framework. The first 30-60 minutes reveal where participants believe value lies. Breakouts signal directional conviction. Failures signal trapped participants. Extensions define profit targets. Day types dictate strategy. Failed breakouts create the highest-probability reversals.
ORB Fusion doesn't predict the future—it identifies structure, detects breakouts, recognizes failures, and generates probabilistic trade plans with defined risk and reward.
The edge is not in the opening range itself. The edge is in recognizing when the market respects structure (follow breakouts) versus when it violates structure (fade breakouts). The edge is in detecting failures faster than discretionary traders. The edge is in systematic classification that prevents catastrophic errors—like fading a trend day or holding through rotation.
Most indicators draw lines. ORB Fusion implements a complete institutional trading methodology: Opening Range theory, Market Profile classification, failed breakout intelligence, Fibonacci projections, volume confirmation, gap psychology, and real-time performance tracking.
Whether you're a beginner learning market structure or a professional seeking systematic ORB implementation, this system provides the framework.
"The market's first word is its opening range. Everything after is commentary." — ORB Fusion
[Excalibur] Ehlers AutoCorrelation Periodogram Modified
Keep your coins folks, I don't need them, don't want them. If you wish to be generous, I do hope that charitable peoples worldwide with surplus food stocks may consider stocking local food banks before stuffing monetary bank vaults, for the crusade of remedying the needs of less fortunate children, parents, elderly, homeless veterans, and everyone else who deserves nutritional sustenance for the soul.
DEDICATION:
This script is dedicated to the memory of Nikolai Dmitriyevich Kondratiev (Никола́й Дми́триевич Кондра́тьев) as tribute for being a pioneering economist and statistician, paving the way for modern econometrics by advocacy of rigorous and empirical methodologies. One of his most substantial contributions to the study of business cycle theory is a revolutionary hypothesis recognizing the existence of dynamic cycle-like phenomena inherent to economies that are characterized by distinct phases of expansion, stagnation, recession and recovery, what we now know as "Kondratiev Waves" (K-waves). Kondratiev was one of the first economists to recognize the vital significance of applying quantitative analysis to empirical data to evaluate economic dynamics by means of statistical methods. His understanding was that conceptual models alone were insufficient to adequately interpret real-world economic conditions, and that sophisticated analysis was necessary to better comprehend the nature of trending/cycling economic behaviors. Additionally, he recognized that prosperous economic cycles were predominantly driven by a combination of technological innovations and infrastructure investments that resulted in profound implications for economic growth and development.
I will mention this... nations' economies MUST be supported and defended to continuously evolve incrementally in order to flourish in perpetuity, OR suffer through eras with lasting ramifications of societal stagnation and implosion.
Analogous to the realm of economics, aperiodic cycles/frequencies, both enduring and ephemeral, do exist in all facets of life, every second of every day. To name a few that any blind man can naturally see are: heartbeat (cardiac cycles), respiration rates, circadian rhythms of sleep, powerful magnetic solar cycles, seasonal cycles, lunar cycles, weather patterns, vegetative growth cycles, and ocean waves. Do not pretend for one second that these basic aforementioned examples do not affect business cycle fluctuations in minuscule and monumental ways hour to hour, day to day, season to season, year to year, and decade to decade in every nation on the planet. Kondratiev's original seminal theories in macroeconomics from nearly a century ago have proven remarkably prescient with many of his antiquated elementary observations/notions/hypotheses in macroeconomics being scholastically studied and topically researched further. Therefore, I am compelled to honor and recognize his statistical insight and foresight.
If only.. Kondratiev could hold a pocket sized computer in the cup of both hands bearing the TradingView logo and platform services, I truly believe he would be amazed in marvelous delight with a GARGANTUAN smile on his face.
INTRODUCTION:
Firstly, this is NOT technically speaking an indicator like most others. I would describe it as an advanced cycle period detector to obtain market data spectral estimates with low latency and moderate frequency resolution. Developers can take advantage of this detector by creating scripts that utilize a "Dominant Cycle Source" input to adaptively govern algorithms. Be forewarned, I would only recommend this for advanced developers, not novice code dabbling. Although, there is some Pine wizardry introduced here for novice Pine enthusiasts to witness and learn from. AI did describe the code into one super-crunched sentence as, "a rare feat of exceptionally formatted code masterfully balancing visual clarity, precision, and complexity to provide immense educational value for both programming newcomers and expert Pine coders alike."
Understand all of the above aforementioned? Buckle up and proceed for a lengthy read of verbose complexity...
This is my enhanced and heavily modified version of the autocorrelation periodogram (ACP) for Pine Script v5.0. It was originally devised by the mathemagician John Ehlers for detecting dominant cycles (frequencies) in an asset's price action. I have been sitting on code similar to this for a long time, but I decided to unleash the advanced code in my own fashion. Originally Ehlers released this with multiple versions, one in a 2016 TASC article and the other in his last published 2013 book "Cycle Analytics for Traders", chapter 8. He wasn't joking about "concepts of advanced technical trading" and ACP is nowhere near his most intimidating and ingenious calculations in code. I will say the book goes into many finer details about the original periodogram, so if you wish to delve into even more elaborate info regarding Ehlers' original ACP form AND how you may adapt algorithms, you'll have to obtain one. Note to reader, comparing Ehlers' original code to my chimeric code embracing the "Power of Pine", you will notice they have little resemblance.
What you see is a new species of autocorrelation periodogram combining Ehlers' innovation with my fascinations of what ACP could be in a Pine package. One other intention of this script's code is to pay homage to Ehlers' lifelong works. Like Kondratiev, Ehlers is also a hardcore cycle enthusiast. I intend to carry on the fire Ehlers envisioned and I believe that is literally displayed here as a pleasant "fiery" example endowed with Pine. With that said, I tried to make the code as computationally efficient as possible, without going into dozens of more crazy lines of code to speed things up even more. There's also a few creative modifications I made by making alterations to the originating formulas that I felt were improvements, one of them being lag reduction. By recently questioning every single thing I thought I knew about ACP, combined with the accumulation of my current knowledge base, this is the innovative revision I came up with. I could have improved it more but decided not to mind thrash too many TV members, maybe later...
I am now confident Pine should have adequate overhead left over to attach various indicators to the dominant cycle via input.source(). TV, I apologize in advance if in the future a server cluster combusts into a raging inferno... Coders, be fully prepared to build entire algorithms from pure raw code, because not all of the built-in Pine functions fully support dynamic periods (e.g. length=ANYTHING). Many of them do, as this was requested and granted a while ago, but some functions are just inherently finicky due to implementation combinations and MUST be emulated via raw code. I would imagine some comprehensive library or numerous authored scripts have portions of raw code for Pine built-ins some where on TV if you look diligently enough.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already. While I was refactoring my life (forgoing many other "important" endeavors) in the early half of 2023, I primarily focused on this code over and over in my surplus time. During that same time I was working on other innovations that are far above and beyond what this code is. I hope you understand.
The best way programmatically may be to incorporate this code into your private Pine project directly, after brutal testing of course, but that may be too challenging for many in early development. Being able to see the periodogram is also beneficial, so input sourcing may be the "better" avenue to tether portions of the dominant cycle to algorithms. Unique indication being able to utilize the dominantCycle may be advantageous when tethering this script to those algorithms. The easiest way is to manually set your indicators to what ACP recognizes as the dominant cycle, but that's actually not considered dynamic real time adaption of an indicator. Different indicators may need a proportion of the dominantCycle, say half it's value, while others may need the full value of it. That's up to you to figure that out in practice. Sourcing one or more custom indicators dynamically to one detector's dominantCycle may require code like this: `int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))`. Keep in mind, some algos can use a float, while algos with a for loop require an integer.
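A bare-bones consumer sketch, assuming the detector's dominantCycle plot is selected as the source input; the EMA is emulated in raw code precisely because the built-in needs a constant length:
```pine
//@version=5
indicator("Dominant Cycle Tether Sketch", overlay = true)

// Clamp the sourced dominant cycle to ACP's 6-49 passband
int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))

// ta.ema() requires a constant length, so the adaptive EMA is raw-coded
var float adaptiveEma = na
alpha = 2.0 / (sourceDC + 1)
adaptiveEma := na(adaptiveEma) ? close : alpha * close + (1.0 - alpha) * adaptiveEma

plot(adaptiveEma, "Adaptive EMA", color.orange, 2)
```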
I have witnessed a few attempts by talented TV members at a Pine-based autocorrelation periodogram, but none of this caliber. Trust me, coding ACP is no ordinary task to accomplish in Pine and modifying it blessed with applicable improvements is even more challenging. For over 4 years, I have been slowly improving this code here and there randomly. It is beautiful just like a real flame, but... this one can still burn you! My mind was fried to charcoal black a few times wrestling with it in the distant past. My very first attempt at translating ACP was a month-long endeavor because PSv3 simply didn't have arrays back then. Anyways, this is ACP with a newer engine, I hope you enjoy it. Any TV subscriber can utilize this code as they please. If you are capable of sufficiently using it properly, please use it wisely with intended good will. That is all I beg of you.
Lastly, you now see how I have rasterized my Pine with Ehlers' swami-like tech. Yep, this whole time I have been using hline() since PSv3, not plot(). Evidently, plot() still has a deficiency limited to only 32 plots when it comes to creating intense eye candy indicators, the last I checked. The use of hline() is the optimal choice for rasterizing Ehlers styled heatmaps. This does only contain two color schemes of the many I have formerly created, but that's all that is essentially needed for this gizmo. Anything else is generally for a spectacle or seeing how brutal Pine can be color treated. The real hurdle is being able to manipulate colors dynamically with Merlin like capabilities from multiple algo results. That's the true challenging part of these heatmap contraptions to obtain multi-colored "predator vision" level indication. You now have basic hline() food for thought empowerment to wield as you can imaginatively dream in Pine projects.
PERIODOGRAM UTILITY IN REAL WORLD SCENARIOS:
This code is a testament to the abilities that have yet to be fully realized with indication advancements. Periodograms, spectrograms, and heatmaps are a powerful tool with real-world applications in various fields such as financial markets, electrical engineering, astronomy, seismology, and neuro/medical applications. For instance, among these diverse fields, it may help traders and investors identify market cycles/periodicities in financial markets, support engineers in optimizing electrical or acoustic systems, aid astronomers in understanding celestial object attributes, assist seismologists with predicting earthquake risks, help medical researchers with neurological disorder identification, and detection of asymptomatic cardiovascular clotting in the vaxxed via full body thermography. In either field of study, technologies in likeness to periodograms may very well provide us with a better sliver of analysis beyond what was ever formerly invented. Periodograms can identify dominant cycles and frequency components in data, which may provide valuable insights and possibly provide better-informed decisions. By utilizing periodograms within aspects of market analytics, individuals and organizations can potentially refrain from making blinded decisions and leverage data-driven insights instead.
PERIODOGRAM INTERPRETATION:
The periodogram renders the power spectrum of a signal, with the y-axis representing the periodicity (frequencies/wavelengths) and the x-axis representing time. The y-axis is divided into periods, with each elevation representing a period. In this periodogram, the y-axis ranges from 6 at the very bottom to 49 at the top, with intermediate values in between, all indicating the power of the corresponding frequency component by color. The higher the position occurs on the y-axis, the longer the period or lower the frequency. The x-axis of the periodogram represents time and is divided into equal intervals, with each vertical column on the axis corresponding to the time interval when the signal was measured. The most recent values/colors are on the right side.
The intensity of the colors on the periodogram indicate the power level of the corresponding frequency or period. The fire color scheme is distinctly like the heat intensity from any casual flame witnessed in a small fire from a lighter, match, or camp fire. The most intense power would be indicated by the brightest of yellow, while the lowest power would be indicated by the darkest shade of red or just black. By analyzing the pattern of colors across different periods, one may gain insights into the dominant frequency components of the signal and visually identify recurring cycles/patterns of periodicity.
SETTINGS CONFIGURATIONS BRIEFLY EXPLAINED:
Source Options: These settings allow you to choose the data source for the analysis. Using the `Source` selection, you may tether to additional data streams (e.g. close, hlcc4, hl2), which also may include samples from any other indicator. For example, this could be my "Chirped Sine Wave Generator" script found in my member profile. By using the `SineWave` selection, you may analyze a theoretical sinusoidal wave with a user-defined period, something already incorporated into the code. The `SineWave` will be displayed over top of the periodogram.
Roofing Filter Options: These inputs control the range of the passband for ACP to analyze. Ehlers had two versions of his highpass filters for his releases, so I included an option for you to see the obvious difference when performing a comparison of both. You may choose between 1st and 2nd order high-pass filters.
Spectral Controls: These settings control the core functionality of the spectral analysis results. You can adjust the autocorrelation lag, adjust the level of smoothing for Fourier coefficients, and control the contrast/behavior of the heatmap displaying the power spectra. I provided two color schemes by checking or unchecking a checkbox.
Dominant Cycle Options: These settings allow you to customize the various types of dominant cycle values. You can choose between floating-point and integer values, and select the rounding method used to derive the final dominantCycle values. Also, you may control the level of smoothing applied to the dominant cycle values.
DOMINANT CYCLE VALUE SELECTIONS:
External to the acs() function, the code takes a dominant cycle value returned from acs() and changes its numeric form based on a specified type and form chosen within the indicator settings. The dominant cycle value can be represented as an integer or a decimal number, depending on the attached algorithm's requirements. For example, FIR filters will require an integer while many IIR filters can use a float. The float forms can be either rounded, smoothed, or floored. If the resulting value is desired to be an integer, it can be rounded up/down or just be in an integer form, depending on how your algorithm may utilize it.
AUTOCORRELATION SPECTRUM FUNCTION BASICALLY EXPLAINED:
In the beginning of the acs() code, the population of caches for precalculated angular frequency factors and smoothing coefficients occur. By precalculating these factors/coefs only once and then storing them in an array, the indicator can save time and computational resources when performing subsequent calculations that require them later.
In the following code block, the "Calculate AutoCorrelations" is calculated for each period within the passband width. The calculation involves numerous summations of values extracted from the roofing filter. Finally, a correlation values array is populated with the resulting values, which are normalized correlation coefficients.
Moving on to the next block of code, labeled "Decompose Fourier Components", Fourier decomposition is performed on the autocorrelation coefficients. It iterates this time through the applicable period range of 6 to 49, calculating the real and imaginary parts of the Fourier components. Frequencies 6 to 49 are the primary focus of interest for this periodogram. Using the precalculated angular frequency factors, the resulting real and imaginary parts are then utilized to calculate the spectral Fourier components, which are stored in an array for later use.
The next section of code smooths the noise ridden Fourier components between the periods of 6 and 49 with a selected filter. This species also employs numerous SuperSmoothers to condition noisy Fourier components. One of the big differences is Ehlers' versions used basic EMAs in this section of code. I decided to add SuperSmoothers.
The final sections of the acs() code determines the peak power component for normalization and then computes the dominant cycle period from the smoothed Fourier components. It first identifies a single spectral component with the highest power value and then assigns it as the peak power. Next, it normalizes the spectral components using the peak power value as a denominator. It then calculates the average dominant cycle period from the normalized spectral components using Ehlers' "Center of Gravity" calculation. Finally, the function returns the dominant cycle period along with the normalized spectral components for later external use to plot the periodogram.
POST SCRIPT:
Concluding, I have to acknowledge a newly found analyst for assistance that I couldn't receive from anywhere else. For one, Claude doesn't know much about Pine, is unfortunately color blind, and can't even see the Pine reference, but it was able to intuitively shred my code with laser precise realizations. Not only that, formulating and reformulating my description needed crucial finesse applied to it, and I couldn't have provided what you have read here without that artificial insight. Finding the right order of words to convey the complexity of ACP and the elaborate accompanying content was a daunting task. No code in my life has ever absorbed so much time and hard fricking work, than what you witness here, an ACP gem cut pristinely. I'm unveiling my version of ACP for an empowering cause, in the hopes a future global army of code wielders will tether it to highly functional computational contraptions they might possess. Here is ACP fully blessed poetically with the "Power of Pine" in sublime code. ENJOY!
[delta2win] ShockSentinel Early Warnings
🚀 ShockSentinel Early Warnings — Advanced Multi-Symbol Shock Detection System
📊 UNIQUE METHODOLOGY:
This indicator implements a proprietary concordance-based shock detection system that goes beyond simple price movement analysis. Unlike basic pump/dump detectors, it uses a sophisticated multi-symbol correlation algorithm to validate signals across multiple assets simultaneously, significantly reducing false positives while maintaining sensitivity to genuine market shocks.
🔬 TECHNICAL APPROACH:
• Adaptive Threshold System: Automatically adjusts detection sensitivity based on timeframe using proprietary scaling algorithms:
- 1m: 0.5% threshold (ultra-sensitive for scalping)
- 3m: 1.0% threshold (high-frequency trading)
- 5m: 2.0% threshold (short-term momentum)
- 15m: 3.0% threshold (intraday swings)
- 1h: 6.0% threshold (daily moves)
- 4h+: 10.0% threshold (swing trading)
• Dual Detection Modes:
- Percent Mode: Calculates the maximum percentage change within a configurable lookback window (1-6 bars) using the formula: max(|(close - close[i]) / close[i] * 100|) for i = 1 to window
- ATR-Normalized Mode: Uses Average True Range for volatility-adjusted detection across different market regimes: max(|close - close[i]| / ATR) for i = 1 to window
• Concordance Algorithm: Proprietary multi-symbol validation system that requires minimum correlation count across up to 4 additional symbols, ensuring signals are validated by market-wide participation rather than isolated price movements
• Non-Repainting Architecture: Optional bar-close confirmation prevents false signals from intraday noise while maintaining real-time alert capability for immediate response
🎯 MATHEMATICAL FOUNDATION:
The core algorithm implements a sliding window maximum change detection:
Percent Change Calculation:
For each bar, the system calculates the maximum absolute percentage change over the specified window:
- PctChange[i] = (close - close[i]) / close[i] * 100
- MaxPct = max(|PctChange[i]|) for i = 1 to window
- Signal triggers when MaxPct >= threshold
ATR-Normalized Calculation:
For volatility-adjusted detection:
- ATRChange[i] = (close - close[i]) / ATR
- MaxATR = max(|ATRChange[i]|) for i = 1 to window
- Signal triggers when MaxATR >= ATR_multiplier
(A code sketch of this sliding-window core follows the concordance notes below.)
Concordance Validation:
- Requires minimum N symbols showing same directional movement
- Validates signal strength through market participation
- Reduces false signals from isolated price movements
- Improves signal quality through correlation analysis
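A stripped-down sketch of the sliding-window core described above (single symbol, no concordance, illustrative defaults):
```pine
//@version=5
indicator("Shock Detection Sketch")

window    = input.int(3, "Lookback Window", minval = 1, maxval = 6)
threshold = input.float(2.0, "Percent Threshold")
useAtr    = input.bool(false, "ATR-Normalized Mode")
atrMult   = input.float(2.5, "ATR Multiplier")

atrVal = ta.atr(14)   // 14-period ATR is an assumption

// Sliding-window maximum absolute change, per the formulas above
float maxPct = 0.0
float maxAtr = 0.0
for i = 1 to window
    maxPct := math.max(maxPct, math.abs((close - close[i]) / close[i] * 100))
    maxAtr := math.max(maxAtr, math.abs(close - close[i]) / atrVal)

shock = useAtr ? maxAtr >= atrMult : maxPct >= threshold
plotshape(shock, "Shock", shape.circle, location.top, color.red)
plot(maxPct, "Max % Change")
```
Concordance would repeat the same test on each additional symbol via request.security() and count agreeing directions before a signal fires.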
⚙️ ADVANCED FEATURES:
• Preset System: 7 pre-configured strategies with optimized parameters:
- Scalp (Ultra-Fast): 0.6x scaling, 2-bar window, real-time alerts
- Aggressive: 0.7x scaling, 2-bar window, real-time alerts
- Balanced: 1.0x scaling, 3-bar window, confirmed signals
- Conservative: 1.3x scaling, 4-bar window, confirmed signals
- Volatility-Adaptive: ATR mode, 7-period ATR, 2.5x multiplier
- Momentum (Intraday): ATR mode, 10-period ATR, 2.0x multiplier
- Swing (Slow): ATR mode, 14-period ATR, 2.8x multiplier
• Real-time vs Confirmed: Choose between immediate alerts or bar-close confirmation
• Visual Analytics: Integrated signal history table with concordance gauges and performance metrics
• Professional Alerts: Multi-format alert system (Compact, Extended, Plain, CSV) with Telegram integration and customizable messaging
💡 UNIQUE VALUE PROPOSITION:
Unlike simple price change detectors, this system provides:
1. Multi-Symbol Validation: Validates signals across multiple correlated assets, ensuring market-wide participation
2. Adaptive Thresholds: Automatically adjusts sensitivity based on timeframe and market conditions
3. Dual Signal Types: Provides both real-time and confirmed signal options for different trading styles
4. Comprehensive Analytics: Includes signal history, concordance gauges, and performance tracking
5. Advanced Concordance: Uses sophisticated correlation algorithms for signal validation
6. Professional Integration: Built-in Telegram support with customizable message formats
🔧 USAGE INSTRUCTIONS:
1. Select Preset: Choose appropriate strategy for your trading style and timeframe
2. Configure Symbols: Add up to 4 additional symbols for concordance validation
3. Set Concordance: Adjust minimum count (higher = more selective, lower = more sensitive)
4. Choose Mode: Select between real-time or confirmed signals based on your risk tolerance
5. Enable Alerts: Configure notification preferences and message formats
6. Monitor Performance: Use integrated tables to track signal quality and concordance
📈 PERFORMANCE CHARACTERISTICS:
• Optimized for Crypto: Designed specifically for high-volatility cryptocurrency markets
• Multi-Timeframe: Effective across all timeframes from 1-minute to 4-hour charts
• False Signal Reduction: Multi-symbol validation significantly reduces false positives
• Flexible Sensitivity: Adjustable thresholds allow customization for different market conditions
• Real-time Capability: Provides immediate alerts for fast-moving markets
• Confirmation Option: Bar-close confirmation for conservative trading approaches
⚠️ TECHNICAL CONSIDERATIONS:
• Real-time Mode: May generate multiple alerts per bar; use cooldown settings to manage frequency
• Data Dependencies: Concordance requires data availability for all configured symbols
• Market Regimes: ATR mode provides better performance in varying volatility conditions
• Signal Quality: Higher concordance requirements reduce false signals but may miss opportunities
• Latency: request.security calls depend on data provider latency and availability
🎯 TARGET MARKETS:
• Cryptocurrency Trading: High-volatility crypto markets with frequent shock events
• Scalping: Short-term trading strategies requiring immediate signal detection
• Swing Trading: Medium-term strategies benefiting from confirmed signals
• Portfolio Management: Multi-asset correlation analysis for risk management
• Algorithmic Trading: Systematic strategies requiring reliable signal validation
📊 SIGNAL INTERPRETATION:
• Green Arrows (Pump): Upward price shock with sufficient concordance
• Red Arrows (Dump): Downward price shock with sufficient concordance
• Large Markers: Confirmed signals with high concordance
• Small Markers: Early signals with lower concordance
• Background Colors: Visual intensity based on concordance strength
• Tables: Historical signal tracking with performance metrics
Goertzel Cycle Composite Wave [Loxx]
As financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis tool that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only XX many bars back will render to the chart. You can control this value with the setting "Number of Bars to Render". This doesn't have anything to do with repainting or the indicator being endpointed***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
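For readers who want to see the recurrence itself, a single-period Goertzel power probe might be sketched in Pine as follows (a didactic sketch, not this indicator's implementation; the source would normally be detrended first, as described below):
```pine
//@version=5
indicator("Goertzel Power Sketch")

period = input.int(20, "Cycle Period to Probe", minval = 2)
window = input.int(80, "Sample Window", minval = 10)

// Goertzel recurrence for one period over the last `window` bars
goertzelPower(float src, int p, int n) =>
    w     = 2.0 * math.pi / p
    coeff = 2.0 * math.cos(w)
    s1 = 0.0
    s2 = 0.0
    for i = 0 to n - 1
        s0 = src[n - 1 - i] + coeff * s1 - s2
        s2 := s1
        s1 := s0
    // squared magnitude (power) of the probed frequency component
    s1 * s1 + s2 * s2 - coeff * s1 * s2

plot(goertzelPower(close, period, window), "Cycle Power")
```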
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether Bartel's test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in Bartel's test.
BartSmoothPer: This input sets the period for the moving average used in Bartel's test.
BartSigLimit: This input sets the significance limit for Bartel's test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartel's test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
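Taken literally, the two-pass weighting described above can be sketched as a WMA applied twice (an assumption read from the description; the published script's exact weighting may differ):
```pine
//@version=5
indicator("Zero-Lag MA Sketch", overlay = true)

len = input.int(20, "Length")

// Two-step weighted linear regression as described: an LWMA of an LWMA
pass1 = ta.wma(close, len)
zlma  = ta.wma(pass1, len)

plot(zlma, "Zero-Lag MA", color.teal, 2)
```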
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
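As a schematic of the subtraction described above (assuming zero_lag_ma, or an HP-style smoother, passed in as the chosen method; both are hypothetical helpers from the earlier sketches):

import numpy as np

def detrend_centered_ma(x, per1, per2, smoother):
    # Subtract a slower smoothing from a faster one; what is left is the
    # oscillation of the series around its own local average.
    return np.asarray(smoother(x, per1)) - np.asarray(smoother(x, per2))

# Example: detrended = detrend_centered_ma(prices, 10, 40, zero_lag_ma)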
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. Together, the zero-lag moving average, Bartels probability, detrending methods, and Hodrick-Prescott filter provide a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past window size (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
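The single-bin Goertzel recurrence itself is standard and compact. Below is a generic Python version for one candidate cycle length (the script's Pine implementation wraps this in the loops and buffers described here); the synthetic check at the end is purely illustrative.

import numpy as np

def goertzel_bin(x, period):
    # Goertzel recurrence for one frequency bin (cycle length = period bars).
    # Returns the (amplitude, phase) of that single spectral component.
    n = len(x)
    w = 2.0 * np.pi / period
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    real = s_prev - s_prev2 * np.cos(w)   # real part of the Goertzel coefficient
    imag = s_prev2 * np.sin(w)            # imaginary part
    amplitude = 2.0 * np.hypot(real, imag) / n
    phase = np.arctan2(imag, real)
    return amplitude, phase

# Synthetic check: a pure 20-bar sine over 200 bars recovers amplitude ~1.0.
t = np.arange(200)
amp, ph = goertzel_bin(np.sin(2 * np.pi * t / 20), 20)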
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
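A schematic of this scan-and-peak-pick step, reusing goertzel_bin from the sketch above; it also shows the descending sort described in the next step (the thresholds, buffers, and exact peak rules in the script differ):

def dominant_cycles(x, max_period, top_n=5):
    # Scan candidate periods, keep local maxima of the amplitude spectrum,
    # and return the strongest ones as (period, amplitude, phase) tuples.
    periods = list(range(4, max_period + 1))
    spectrum = [goertzel_bin(x, p) for p in periods]  # helper from the sketch above
    amps = [a for a, _ in spectrum]
    peaks = [i for i in range(1, len(amps) - 1)
             if amps[i] > amps[i - 1] and amps[i] > amps[i + 1]]
    peaks.sort(key=lambda i: amps[i], reverse=True)   # strongest cycles first
    return [(periods[i], amps[i], spectrum[i][1]) for i in peaks[:top_n]]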
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window, which is defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveform for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
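A minimal sketch of the combination step, assuming cycles are given as (period, amplitude, phase) tuples such as those returned by the dominant_cycles sketch above; the actual script's sign and phase conventions may differ:

import numpy as np

def composite_wave(cycles, window_size, use_cosine=True, use_addition=True):
    # Rebuild each kept cycle over the window and combine into one wave.
    t = np.arange(window_size)
    wave = np.zeros(window_size)
    trig = np.cos if use_cosine else np.sin
    for period, amp, phase in cycles:
        component = amp * trig(2 * np.pi * t / period + phase)
        wave = wave + component if use_addition else wave - component
    return wave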
Storing waveforms in a matrix:
The calculated waveform for each cycle is stored in the goeWorkPast matrix, which holds the waveforms for the specified time window. Each row in the matrix represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for specified time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
WindowSizePast is updated to ensure it is at least twice MaxPer (the maximum period).
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for specified time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint Goertzel values that make up the composite waveform.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms and visualizes them on the chart using colored lines.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator can be interpreted by analyzing its plotted composite wave line, which represents the combined cyclical component extracted from the price data.
The composite wave is drawn as a solid line, with green indicating a bullish trend and red indicating a bearish trend.
Interpreting the indicator therefore amounts to reading the direction of the composite wave line and matching it with the corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott in a 1981 working paper that was later published in 1997. It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: Σ (y_t − τ_t)² + λ Σ [(τ_{t+1} − τ_t) − (τ_t − τ_{t−1})]²
Where y_t is the observed data, τ_t is the trend component, and:
1. The first term penalizes deviations of the data from the trend.
2. The second term penalizes curvature in the trend, enforcing smoothness.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The value of the smoothing parameter λ depends on the frequency of the data: conventional choices are 100 for annual data, 1,600 for quarterly data, and 14,400 for monthly data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
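For readers who want to see the mechanics, the objective above has a closed-form solution, τ = (I + λ·D'D)⁻¹·y, where D is the second-difference operator. A dense-matrix Python sketch follows; production code would use banded or sparse solvers for long series.

import numpy as np

def hp_filter(y, lam=1600.0):
    # Closed-form HP filter: trend = (I + lam * D'D)^-1 y,
    # where D maps a series to its second differences.
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    cycle = y - trend  # cyclical component = data minus trend
    return trend, cycle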
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Browser [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartels test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
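As a purely hypothetical illustration of the mean-reversion idea (not part of the indicator, and not a recommendation): treat the composite wave as the cycle's mean path and fade standardized deviations from it. Note that the sketch standardizes over the full sample, which introduces look-ahead; a live version would use a rolling window.

import numpy as np

def mean_reversion_signal(price, cycle_wave, band=1.0):
    # Hypothetical rule: short when price sits 'band' standard deviations
    # above the composite wave, long when it sits below; flat otherwise.
    residual = np.asarray(price, float) - np.asarray(cycle_wave, float)
    z = (residual - residual.mean()) / residual.std()
    return np.where(z > band, -1, np.where(z < -band, 1, 0))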
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when UseCycleList is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining its purpose, methodology, and applications in finance, and shows how the functions work together to make the indicator an effective analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. Together, the zero-lag moving average, Bartels probability, detrending methods, and Hodrick-Prescott filter provide a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
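Conceptually, the future leg simply evaluates the same kept cycles at bar indices beyond the last bar. A hedged sketch, assuming (period, amplitude, phase) tuples anchored at the final bar (the script's exact phase bookkeeping may differ):

import numpy as np

def project_future(cycles, window_future, use_cosine=True):
    # Evaluate each kept cycle at t = 1 .. window_future bars past the
    # last bar, then sum the components into one projected wave.
    t = np.arange(1, window_future + 1)
    trig = np.cos if use_cosine else np.sin
    wave = np.zeros(window_future)
    for period, amp, phase in cycles:
        wave += amp * trig(2 * np.pi * t / period + phase)
    return wave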
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartels value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartels value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartels value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance:
While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott in a 1981 working paper (formally published in 1997). It is a simple filter, governed by a single smoothing parameter, that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by choosing the trend τ_t that minimizes the following objective function:
Minimize: Σ_t (y_t − τ_t)² + λ · Σ_t [(τ_{t+1} − τ_t) − (τ_t − τ_{t−1})]²
Where:
y_t is the observed series and τ_t is the trend component.
The first term penalizes the deviation of the data from the trend.
The second term penalizes changes in the trend's growth rate (its second differences), enforcing smoothness.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is set according to the frequency of the data: the standard choices are 100 for annual data and 1600 for quarterly data, with much larger values (such as 14400) commonly used for monthly data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Volume SuperTrend AI (Expo)█ Overview
The Volume SuperTrend AI is an advanced technical indicator used to predict trends in price movements by utilizing a combination of traditional SuperTrend calculation and AI techniques, particularly the k-nearest neighbors (KNN) algorithm.
The Volume SuperTrend AI is designed to provide traders with insights into potential market trends, using both volume-weighted moving averages (VWMA) and the k-nearest neighbors (KNN) algorithm. By combining these approaches, the indicator aims to offer more precise predictions of price trends, offering bullish and bearish signals.
█ How It Works
Volume Analysis: By utilizing volume-weighted moving averages (VWMA), the Volume SuperTrend AI emphasizes the importance of trading volume in the trend direction, allowing it to respond more accurately to market dynamics.
Artificial Intelligence Integration - k-Nearest Neighbors (k-NN) Algorithm: The k-NN algorithm is employed to intelligently examine historical data points, measuring distances between current parameters and previous data. The nearest neighbors are utilized to create predictive modeling, thus adapting to intricate market patterns.
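To illustrate the volume-weighted half of this pairing, here is a minimal Pine v5 sketch of a SuperTrend built on a VWMA baseline instead of the usual hl2. All input values are placeholders, and the published script's k-NN layer is deliberately omitted:
//@version=5
indicator("Volume-weighted SuperTrend (sketch)", overlay = true)
maLen  = input.int(10, "VWMA Length")
atrLen = input.int(10, "ATR Length")
factor = input.float(3.0, "Factor")
mid = ta.vwma(close, maLen) // volume-weighted baseline
atr = ta.atr(atrLen)
basicUp = mid + factor * atr
basicDn = mid - factor * atr
var float up = na
var float dn = na
// Ratchet the bands so each can only tighten while the trend holds
up := na(up[1]) or close[1] > up[1] ? basicUp : math.min(basicUp, up[1])
dn := na(dn[1]) or close[1] < dn[1] ? basicDn : math.max(basicDn, dn[1])
var int dir = 1
dir := close > up[1] ? 1 : close < dn[1] ? -1 : dir
plot(dir == 1 ? dn : up, "Volume SuperTrend", color = dir == 1 ? color.teal : color.red, linewidth = 2)
Because the midline is volume-weighted, heavy-volume moves shift the bands faster than the same price change on thin volume, which is the behavior the description above refers to.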
█ How to use
Trend Identification
The Volume SuperTrend AI indicator considers not only price movement but also trading volume, introducing an extra dimension to trend analysis. By integrating volume data, the indicator offers a more nuanced and robust understanding of market trends. When trends are supported by high trading volumes, they tend to be more stable and reliable. In practice, a green line displayed beneath the price typically suggests an upward trend, reflecting a bullish market sentiment. Conversely, a red line positioned above the price signals a downward trend, indicative of bearish conditions.
Trend Continuation signals
The AI algorithm is the fundamental component in the coloring of the Volume SuperTrend. This integration serves as a means of predicting the trend while preserving the inherent characteristics of the SuperTrend. By maintaining these essential features, the AI-enhanced Volume SuperTrend allows traders to more accurately identify and capitalize on trend continuation signals.
TrailingStop
The Volume SuperTrend AI indicator serves as a dynamic trailing stop loss, adjusting with both price movement and trading volume. This approach protects profits while allowing the trade room to grow, taking into account volume for a more nuanced response to market changes.
█ Settings
AI Settings:
Neighbors (k):
This setting controls the number of nearest neighbors to consider in the k-Nearest Neighbors (k-NN) algorithm. By adjusting this parameter, you can directly influence the sensitivity of the model to local fluctuations in the data. A lower value of k may lead to predictions that closely follow short-term trends but may be prone to noise. A higher value of k can provide more stable predictions, considering the broader context of market trends, but might lag in responsiveness.
Data (n):
This setting refers to the number of data points to consider in the model. It allows the user to define the size of the dataset that will be analyzed. A larger value of n may provide more comprehensive insights by considering a wider historical context but can increase computational complexity. A smaller value of n focuses on more recent data, possibly providing quicker insights but might overlook longer-term trends.
AI Trend Settings:
Price Trend & Prediction Trend:
These settings allow you to adjust the lengths of the weighted moving averages that are used to calculate both the price trend and the prediction trend. Shorter lengths make the trends more responsive to recent price changes, capturing quick market movements. Longer lengths smooth out the trends, filtering out noise, and highlighting more persistent market directions.
AI Trend Signals:
This toggle option enables or disables the trend signals generated by the AI. Activating this function may assist traders in identifying key trend shifts and opportunities for entry or exit. Disabling it may be preferred when focusing on other aspects of the analysis.
Super Trend Settings:
Length:
This setting determines the length of the SuperTrend, affecting how it reacts to price changes. A shorter length will produce a more sensitive SuperTrend, reacting quickly to price fluctuations. A longer length will create a smoother SuperTrend, reducing false alarms but potentially lagging behind real market changes.
Factor:
This parameter is the multiplier for the Average True Range (ATR) in SuperTrend calculation. By adjusting the factor, you can control the distance of the SuperTrend from the price. A higher factor makes the SuperTrend further from the price, giving more room for price movement but possibly missing shorter-term signals. A lower factor brings the SuperTrend closer to the price, making it more reactive but possibly more prone to false signals.
Moving Average Source:
This setting lets you choose the type of moving average used for the SuperTrend calculation, such as Simple Moving Average (SMA), Exponential Moving Average (EMA), etc.
Different types of moving averages provide various characteristics to the SuperTrend, enabling customization to align with individual trading strategies and market conditions.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
AI Trend Navigator [K-Neighbor]█ Overview
In the evolving landscape of trading and investment, the demand for sophisticated and reliable tools is ever-growing. The AI Trend Navigator is an indicator designed to meet this demand, providing valuable insights into market trends and potential future price movements. The AI Trend Navigator indicator is designed to predict market trends using the k-Nearest Neighbors (KNN) classifier.
By intelligently analyzing recent price actions and emphasizing similar values, it helps traders to navigate complex market conditions with confidence. It provides an advanced way to analyze trends, offering potentially more accurate predictions compared to simpler trend-following methods.
█ Calculations
KNN Moving Average Calculation: The core of the algorithm is a KNN Moving Average that computes the mean of the 'k' closest values to a target within a specified window size. It does this by iterating through the window, calculating the absolute differences between the target and each value, and then finding the mean of the closest values. The target and value are selected based on user preferences (e.g., using the VWAP or Volatility as a target).
KNN Classifier Function: This function applies the k-nearest neighbor algorithm to classify the price action into positive, negative, or neutral trends. It looks at the nearest 'k' bars, calculates the Euclidean distance between them, and categorizes them based on the relative movement. It then returns the prediction based on the highest count of positive, negative, or neutral categories.
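A rough Pine v5 reconstruction of the KNN Moving Average step described above (input names, defaults, and the f_knn_ma helper are assumptions; the classifier step is not shown):
//@version=5
indicator("KNN Moving Average (sketch)")
windowSize = input.int(30, "Window Size")
kClosest   = input.int(5, "Number of Closest Values")
f_knn_ma(float tgt, float src, int win, int k) =>
    dists = array.new_float()
    vals  = array.new_float()
    for i = 0 to win - 1
        array.push(dists, math.abs(tgt - src[i])) // distance to the target
        array.push(vals, src[i])
    float s = 0.0
    for j = 1 to k                                // pull the k nearest values
        idx = array.indexof(dists, array.min(dists))
        s += array.get(vals, idx)
        array.remove(dists, idx)
        array.remove(vals, idx)
    s / k
plot(f_knn_ma(ta.vwap, hl2, windowSize, kClosest), "KNN MA")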
█ How to use
Traders can use this indicator to identify potential trend directions in different markets.
Spotting Trends: Traders can use the KNN Moving Average to identify the underlying trend of an asset. By focusing on the k closest values, this component of the indicator offers a clearer view of the trend direction, filtering out market noise.
Trend Confirmation: The KNN Classifier component can confirm existing trends by predicting the future price direction. By aligning predictions with current trends, traders can gain more confidence in their trading decisions.
█ Settings
PriceValue: This determines the type of price input used for distance calculation in the KNN algorithm.
hl2: Uses the average of the high and low prices.
VWAP: Uses the Volume Weighted Average Price.
Effect: Changing this input will modify the reference values used in the KNN classification, potentially altering the predictions.
TargetValue: This sets the target variable that the KNN classification will attempt to predict.
Price Action: Uses the moving average of the closing price.
VWAP: Uses the Volume Weighted Average Price.
Volatility: Uses the Average True Range (ATR).
Effect: Selecting different targets will affect what the KNN is trying to predict, altering the nature and intent of the predictions.
Number of Closest Values: Defines how many closest values will be considered when calculating the mean for the KNN Moving Average.
Effect: Increasing this value makes the algorithm consider more nearest neighbors, smoothing the indicator and potentially making it less reactive. Decreasing this value may make the indicator more sensitive but possibly more prone to noise.
Neighbors: This sets the number of neighbors that will be considered for the KNN Classifier part of the algorithm.
Effect: Adjusting the number of neighbors affects the sensitivity and smoothness of the KNN classifier.
Smoothing Period: Defines the smoothing period for the moving average used in the KNN classifier.
Effect: Increasing this value would make the KNN Moving Average smoother, potentially reducing noise. Decreasing it would make the indicator more reactive but possibly more prone to false signals.
█ What is the K-Nearest Neighbors (K-NN) algorithm?
At its core, the K-NN algorithm recognizes patterns within market data and analyzes the relationships and similarities between data points. By considering the 'K' most similar instances (or neighbors) within a dataset, it predicts future price movements based on historical trends. The K-Nearest Neighbors (K-NN) algorithm is a type of instance-based or non-generalizing learning. While K-NN is considered a relatively simple machine-learning technique, it falls under the AI umbrella.
We can classify the K-Nearest Neighbors (K-NN) algorithm as a form of artificial intelligence (AI), and here's why:
Machine Learning Component: K-NN is a type of machine learning algorithm, and machine learning is a subset of AI. Machine learning is about building algorithms that allow computers to learn from and make predictions or decisions based on data. Since K-NN falls under this category, it is aligned with the principles of AI.
Instance-Based Learning: K-NN is an instance-based learning algorithm. This means that it makes decisions based on the entire training dataset rather than deriving a discriminative function from the dataset. It looks at the 'K' most similar instances (neighbors) when making a prediction, hence adapting to new information if the dataset changes. This adaptability is a hallmark of intelligent systems.
Pattern Recognition: The core of K-NN's functionality is recognizing patterns within data. It identifies relationships and similarities between data points, something akin to human pattern recognition, a key aspect of intelligence.
Classification and Regression: K-NN can be used for both classification and regression tasks, two fundamental problems in machine learning and AI. The indicator code is used for trend classification, a predictive task that aligns with the goals of AI.
Simplicity Doesn't Exclude AI: While K-NN is often considered a simpler algorithm compared to deep learning models, simplicity does not exclude something from being AI. Many AI systems are built on simple rules and can be combined or scaled to create complex behavior.
No Explicit Model Building: Unlike traditional statistical methods, K-NN does not build an explicit model during training. Instead, it waits until a prediction is required and then looks at the 'K' nearest neighbors from the training data to make that prediction. This lazy learning approach is another aspect of machine learning, part of the broader AI field.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Adaptive Oscillator constructor [lastguru]Adaptive Oscillators use the same principle as Adaptive Moving Averages. This is an experiment to separate length generation from oscillators, offering multiple alternatives to be combined. Some of the combinations are widely known, some are not. Note that all Oscillators here are normalized to the -1..1 range. This indicator is based on my previously published public libraries and also serves as a usage demonstration for them. I will try to expand the collection (suggestions are welcome); however, it is not meant as an encyclopaedic resource, so you are encouraged to experiment yourself: by looking at the source code of this indicator, I am sure you will see how trivial it is to use the provided libraries and expand them with your own ideas and combinations. I give no recommendation on what settings to use, but if you find some useful setting, combination, or application ideas (or bugs in my code), I would be happy to read about them in the comments section.
The indicator works in three stages: Prefiltering, Length Adaptation and Oscillators.
Prefiltering is a fast smoothing to get rid of high-frequency (2, 3 or 4 bar) noise.
Adaptation algorithms are roughly subdivided in two categories: classic Length Adaptations and Cycle Estimators (they are also implemented in separate libraries), all are selected in Adaptation dropdown. Length Adaptation used in the Adaptive Moving Averages and the Adaptive Oscillators try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly with a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
Chande (Price) - based on Chande's Dynamic Momentum Index (CDMI or DYMOI), which is an RSI computed with this dynamically adapted length
Chande (Volume) - a variant of Chande's algorithm, where volume is used instead of price
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Deviation Scaling - based on DSSS by John F. Ehlers
Median Average - based on Median Average Adaptive Filter by John F. Ehlers
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Alpha - based on MESA Adaptive Moving Average by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers , but unlike Alpha calculation, this adaptation estimates cycle period
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (also, it does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
Chande's Adaptations also have 3 additional parameters: SD Length (lookback length of the Standard Deviation), Smooth (smoothing length of the Standard Deviation) and Power (exponent of the length adaptation; lower means smaller variation). These are internal tweaks for the calculation.
Oscillators section offer you a choice of Oscillator algorithms:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic) by John F. Ehlers
CMO - Chande Momentum Oscillator
RSI - Relative Strength Index
Volume-scaled RSI - my own version of RSI. It scales price movements by the proportion of RMS of volume
Momentum RSI - RSI of price momentum
Rocket RSI - inspired by RocketRSI by John F. Ehlers (not an exact implementation)
MFI - Money Flow Index
LRSI - Laguerre RSI by John F. Ehlers
LRSI with Fractal Energy - a combo oscillator that uses Fractal Energy to tune LRSI gamma
Fractal Energy - Fractal Energy or Choppiness Index by E. W. Dreiss
Efficiency ratio - based on Kaufman Adaptive Moving Average calculation
DMI - Directional Movement Index (only ADX is drawn)
Fast DMI - same as DMI, but without secondary smoothing
If no Adaptation is selected (None option), you can set Length directly. If an Adaptation is selected, then Cycle multiplier can be set.
Before an Oscillator, a High Pass filter may be executed to remove cyclic components longer than the provided Highpass Length (no High Pass filter if Highpass Length = 0). Both before and after the Oscillator a Moving Average can be applied. The following Moving Averages are included: SMA, RMA, EMA, HMA, VWMA, 2-pole Super Smoother, 3-pole Super Smoother, Filt11, Triangle Window, Hamming Window, Hann Window, Lowpass, DSSS. For more details on these Moving Averages, you can check my other Adaptive Constructor indicator.
The Oscillator output may be renormalized and postprocessed with the following Normalization algorithms:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic) by John F. Ehlers
Inverse Fisher Transform - Inverse Fisher Transform
Noise Elimination Technology - a simplified Kendall correlation algorithm "Noise Elimination Technology" by John F. Ehlers
Except for the Inverse Fisher Transform, all Normalization algorithms can have a Length parameter. If it is not specified (set to 0), then the calculated Oscillator length is used.
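As a small illustration of the normalization stage, here is a Pine v5 sketch of the Inverse Fisher Transform applied to a stochastic rescaled to -1..1, matching the -1..1 convention used throughout this constructor (the pre-gain value is an assumption):
//@version=5
indicator("Inverse Fisher Transform normalization (sketch)")
len  = input.int(14, "Length")
gain = input.float(3.0, "Pre-gain") // steepens the S-curve before the transform
x    = gain * (ta.stoch(close, high, low, len) / 50.0 - 1.0) // 0..100 -> -1..1
ift  = (math.exp(2 * x) - 1) / (math.exp(2 * x) + 1)         // tanh(x), stays in -1..1
plot(ift, "IFT Oscillator")
hline(0.5)
hline(-0.5)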
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.
Impulse Reactor RSI-SMA Trend Indicator [ApexLegion]Impulse Reactor RSI-SMA Trend Indicator
Introduction and Theoretical Background
Design Rationale
Standard indicators frequently generate binary 'BUY' or 'SELL' signals without accounting for the broader market context. This often results in erratic "Flip-Flop" behavior, where signals are triggered indiscriminately regardless of the prevailing volatility regime.
Impulse Reactor was engineered to address this limitation by unifying two critical requirements: Quantitative Rigor and Execution Flexibility.
The Solution
Composite Analytical Framework This script is not a simple visual overlay of existing indicators. It is an algorithmic synthesis designed to function as a unified decision-making engine. The primary objective was to implement rigorous quantitative analysis (Volatility Normalization, Structural Filtering) directly within an alert-enabled framework. This architecture is designed to process signals through strict, multi-factor validation protocols before generating real-time notifications, allowing users to focus on structurally validated setups without manual monitoring.
How It Works
This is not a simple visual mashup. It utilizes a cross-validation algorithm where the Trend Structure acts as a gatekeeper for Momentum signals:
Logic over Lag: Unlike simple moving average crossovers, this script uses a 15-layer Gradient Ribbon to detect "Laminar Flow." If the ribbon is knotted (Compression), the system mathematically suppresses all signals.
Volatility Normalization: The core calculation adapts to ATR (Average True Range). This means the indicator automatically expands in volatile markets and contracts in quiet ones, maintaining accuracy without constant manual tweaking.
Adaptive Signal Thresholding: It incorporates an 'Anti-Greed' algorithm (Dynamic Thresholding) that automatically adjusts entry criteria based on trend duration. This logic aims to mitigate the risk of entering positions during periods of statistical trend exhaustion.
Why Use It?
Market State Decoding: The gradient Ribbon visualizes the underlying trend phase in real-time.
◦ Cyan/Blue Flow: Strong Bullish Trend (Laminar Flow).
◦ Magenta/Pink Flow: Strong Bearish Trend.
◦ Compressed/Knotted: When the ribbon lines are tightly squeezed or overlapping, it signals Consolidation. The system filters signals here to avoid chop.
Noise Reduction: The goal is not to catch every pivot, but to isolate high-confidence setups. The logic explicitly filters out minor fluctuations to help maintain position alignment with the broader trend.
⚖️ Chapter 1: System Architecture
Introduction: Composite Analytical Framework
System Overview
Impulse Reactor serves as a comprehensive technical analysis engine designed to synthesize three distinct market dimensions—Momentum, Volatility, and Trend Structure—into a unified decision-making framework. Unlike traditional methods that analyze these metrics in isolation, this system functions as a central processing unit that integrates disparate data streams to construct a coherent model of market behavior.
Operational Objective
The primary objective is to transition from single-dimensional signal generation to a multi-factor assessment model. By fusing data from the Impulse Core (Volatility), Gradient Oscillator (Momentum), and Structural Baseline (Trend), the system aims to filter out stochastic noise and identify high-probability trade setups grounded in quantitative confluence.
Market Microstructure Analysis: Limitations of Conventional Models
Extensive backtesting and quantitative analysis have identified three critical inefficiencies in standard oscillator-based strategies:
• Bounded Oscillator Limitations (The "Oscillation Trap"): Traditional indicators such as RSI or Stochastics are mathematically constrained between fixed values (0 to 100). In strong trending environments, these metrics often saturate in "overbought" or "oversold" zones. Consequently, traders relying on static thresholds frequently exit structurally valid positions prematurely or initiate counter-trend trades against prevailing momentum, resulting in suboptimal performance.
• Quantitative Blindness to Quality: Standard moving averages and trend indicators often fail to distinguish the qualitative nature of price movement. They treat low-volume drift and high-velocity expansion identically. This inability to account for "Volatility Quality" leads to delayed responsiveness during critical market events.
• Fractal Dissonance (Timeframe Disconnect): Financial markets exhibit fractal characteristics where trends on lower timeframes may contradict higher timeframe structures. Manual integration of multi-timeframe analysis increases cognitive load and susceptibility to human error, often resulting in conflicting biases at the point of execution.
Core Design Principles
To mitigate the aforementioned systemic inefficiencies, Impulse Reactor employs a modular architecture governed by three foundational principles:
Principle A:
Volatility Precursor Analysis Market mechanics demonstrate that volatility expansion often functions as a leading indicator for directional price movement. The system is engineered to detect "Volatility Deviation" — specifically, the divergence between short-term and long-term volatility baselines—prior to its manifestation in price action. This allows for entry timing aligned with the expansion phase of market volatility.
Principle B:
Momentum Density Visualization The system replaces singular momentum lines with a "Momentum Density" model utilizing a 15-layer Simple Moving Average (SMA) Ribbon.
• Concept: This visualization represents the aggregate strength and consistency of the trend.
• Application: A fully aligned and expanded ribbon indicates a robust trend structure ("Laminar Flow") capable of withstanding minor counter-trend noise, whereas a compressed ribbon signals consolidation or structural weakness.
Principle C:
Adaptive Confluence Protocols Signal validity is strictly governed by a multi-dimensional confluence logic. The system suppresses signal generation unless there is synchronized confirmation across all three analytical vectors:
1. Volatility: Confirmed expansion via the Impulse Core.
2. Momentum: Directional alignment via the Hybrid Oscillator.
3. Structure: Trend validation via the Baseline. This strict filtering mechanism significantly reduces false positives in non-trending (choppy) environments while maintaining sensitivity to genuine breakouts.
🔍 Chapter 2: Core Modules & Algorithmic Logic
Module A: Impulse Core (Normalized Volatility Deviation)
Operational Logic The Impulse Core functions as a volatility-normalized momentum gauge rather than a standard oscillator. It is designed to identify "Volatility Contraction" (Squeeze) and "Volatility Expansion" phases by quantifying the divergence between short-term and long-term volatility states.
Volatility Z-Score Normalization
The formula implements a custom normalization algorithm. Unlike standard oscillators that rely on absolute price changes, this logic calculates the Z-Score of the Volatility Spread.
◦ Numerator: (atr_f - atr_s) captures the raw momentum of volatility expansion.
◦ Denominator: (std_f + 1e-6) standardizes this value against historical variance.
◦ Result: This allows the indicator to scale consistently across assets (e.g., Bitcoin vs. Euro) without manual recalibration.
f_impulse() =>
    atr_f = ta.atr(fastLen)          // Fast Volatility Baseline
    atr_s = ta.atr(slowLen)          // Slow Volatility Baseline
    std_f = ta.stdev(atr_f, devLen)  // Volatility Standard Deviation
    (atr_f - atr_s) / (std_f + 1e-6) // Normalized Differential Calculation
Algorithmic Framework
• Differential Calculation: The system computes the spread between a Fast Volatility Baseline (ATR-10) and a Slow Volatility Baseline (ATR-30).
• Normalization Protocol: To standardize consistency across diverse asset classes (e.g., Forex vs. Crypto), the raw differential is divided by the standard deviation of the volatility itself over a 30-period lookback.
• Signal Generation:
◦ Contraction (Squeeze): When the Fast ATR compresses below the Slow ATR, it registers a potential volatility buildup phase.
◦ Expansion (Release): A rapid divergence of the Fast ATR above the Slow ATR signals a confirmed volatility expansion, validating the strength of the move.
Module B: Gradient Oscillator (RSI-SMA Hybrid)
Design Rationale To mitigate the "noise" and "false reversal" signals common in single-line oscillators (like standard RSI), this module utilizes a 15-Layer Gradient Ribbon to visualize momentum density and persistence.
Technical Architecture
• Ribbon Array: The system generates 15 sequential Simple Moving Averages (SMA) applied to a volatility-adjusted RSI source. The length of each layer increases incrementally.
• State Analysis:
◦ Momentum Alignment (Laminar Flow): When all 15 layers are expanded and parallel, it indicates a robust trend where buying/selling pressure is distributed evenly across multiple timeframes. This state helps filter out premature "overbought/oversold" signals.
◦ Consolidation (Compression): When the distance between the fastest layer (Layer 1) and the slowest layer (Layer 15) approaches zero or the layers intersect, the system identifies a "Non-Tradable Zone," preventing entries during choppy market conditions.
// Laminar Flow Validation (runnable sketch; layer lengths 2..16 follow the Ribbon defaults)
f_sma(float src, int len) =>
    float s = 0.0
    for i = 0 to len - 1
        s += src[i]
    s / len
f_validate_trend(float rsiSrc, float min_expansion_threshold, bool signal) =>
    ribbon = array.new_float()
    for i = 0 to 14
        array.push(ribbon, f_sma(rsiSrc, 2 + i)) // the 15 Ribbon layers
    // Calculate spread between Ribbon layers
    ribbon_spread = array.stdev(ribbon)
    // Only allow signals if the Ribbon is expanded (Laminar Flow)
    is_flowing = ribbon_spread > min_expansion_threshold
    // If compressed (knotted), force the signal to false
    is_flowing ? signal : false
Module C: Adaptive Signal Filtering (Behavioral Bias Mitigation)
This subsystem, operating as an algorithmic "Anti-Greed" Mechanism, addresses the statistical tendency for signal degradation following prolonged trends.
Dynamic Threshold Adjustment
• Win Streak Detection: The algorithm internally tracks the outcome of closed trade cycles.
• Sensitivity Multiplier: Upon detecting consecutive successful signals in the same direction, a Penalty_Factor is applied to the entry logic.
• Operational Impact: This effectively raises the Required_Slope threshold for subsequent signals. For example, after three consecutive bullish signals, the system requires a 30% steeper trend angle to validate a fourth entry. This enforces stricter discipline during extended trends to reduce the probability of entering at the point of trend exhaustion.
Anti-Greed Logic: Dynamic Threshold Calculation
f_adjust_threshold(base_slope, win_streak) =>
    // Adds a 10% penalty to the difficulty for every consecutive win
    penalty_factor = 0.10
    risk_scaler = 1 + (win_streak * penalty_factor)
    // Returns the new, harder-to-reach threshold
    base_slope * risk_scaler
Module D: Trend Baseline (Triple-Smoothed Structure)
The Trend Baseline serves as the structural filter for all signals. It employs a Triple-Smoothed Hybrid Algorithm designed to balance lag reduction with noise filtration.
Smoothing Stages
1. Volatility Banding: Utilizes a SuperTrend-based calculation to establish the upper and lower boundaries of price action.
2. Weighted Filter: Applies a Weighted Moving Average (WMA) to prioritize recent price data.
3. Exponential Smoothing: A final Exponential Moving Average (EMA) pass is applied to create a seamless baseline curve.
Functionality
This "Heavy" baseline resists minor intraday volatility spikes while remaining responsive to sustained structural shifts. A signal is only considered valid if the price action maintains structural integrity relative to this baseline
🚦 Chapter 3: Risk Management & Exit Protocols
Quantitative Risk Management (TP/SL & Trailing)
Foundational Architecture: Volatility-Adjusted Geometry Unlike strategies relying on static nominal values, Impulse Reactor establishes dynamic risk boundaries derived from quantitative volatility metrics. This design aligns trade invalidation levels mathematically with the current market regime.
• ATR-Based Dynamic Bracketing:
The protocol calculates Stop-Loss and Take-Profit levels by applying Fibonacci coefficients (Default: 0.786 for SL / 1.618 for TP) to the Average True Range (ATR).
◦ High Volatility Environments: The risk bands automatically expand to accommodate wider variance, preventing premature exits caused by standard market noise.
◦ Low Volatility Environments: The bands contract to tighten risk parameters, thereby dynamically adjusting the Risk-to-Reward (R:R) geometry.
• Close-Validation Protocol ("Soft Stop"):
Institutional algorithms frequently execute liquidity sweeps—driving prices briefly below key support levels to accumulate inventory.
◦ Mechanism: When the "Soft Stop" feature is enabled, the system filters out intraday volatility spikes. The stop-loss is conditional; execution is triggered only if the candle closes beyond the invalidation threshold.
◦ Strategic Advantage: This logic distinguishes between momentary price wicks and genuine structural breakdowns, preserving positions during transient volatility.
• Step-Function Trailing Mechanism:
To protect unrealized PnL while allowing for normal price breathing, a two-phase trailing methodology is employed:
◦ Phase 1 (Activation): The trailing function remains dormant until the price advances by a pre-defined percentage threshold.
◦ Phase 2 (Dynamic Floor): Once armed, the stop level creates a moving floor, adjusting relative to price action while maintaining a volatility-based (ATR) buffer to systematically protect unrealized PnL. (A consolidated sketch of the bracket and trailing mechanics appears at the end of this chapter.)
• Algorithmic Exit Protocols (Dynamic Liquidity Analysis)
◦ Rationale: Inefficiencies of Static Targets Static "Take Profit" levels often result in suboptimal exits. They compel traders to close positions based on arbitrary figures rather than evolving market structure, potentially capping upside during significant trends or retaining positions while the underlying trend structure deteriorates.
◦ Solution: Structural Integrity Assessment The system utilizes a Dynamic Liquidity Engine to continuously audit the validity of the position. Instead of targeting a specific price point, the algorithm evaluates whether the trend remains statistically robust.
Multi-Factor Exit Logic (The Tri-Vector System)
The Smart Exit protocol executes only when specific algorithmic invalidation criteria are met:
• 1. Momentum Exhaustion (Confluence Decay): The system monitors a 168-hour rolling average of the Confluence Score. A significant deviation below this historical baseline indicates momentum exhaustion, signaling that the driving force behind the trend has dissipated prior to a price reversal. This enables preemptive exits before a potential drawdown.
• 2. Statistical Over-Extension (Mean Reversion): Utilizing the core volatility logic, the system identifies instances where price deviates beyond 2.0 standard deviations from the mean. While the trend may be technically bullish, this statistical anomaly suggests a high probability of mean reversion (elastic snap-back), triggering a defensive exit to capitalize on peak valuation.
• 3. Oscillator Rejection (Immediate Pivot): To manage sudden V-shaped volatility, the system monitors RSI pivots. If a sharp "Pivot High" or divergence is detected, the protocol triggers an immediate "Peak Exit," bypassing standard trend filters to secure liquidity during high-velocity reversals.
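To make the bracket, soft-stop, and trailing mechanics of this chapter concrete, here is a consolidated Pine v5 sketch. The entry condition, longEntry, and hiSince are hypothetical; the trailing offsets use the percentage defaults from Chapter 6 rather than an ATR buffer; and the remaining numbers simply mirror the listed defaults (0.786/1.618 Fibonacci ratios, 1.5x/2x multipliers, ATR 14). This is not the indicator's actual code:
//@version=5
indicator("Ch.3 risk geometry (sketch)", overlay = true)
longEntry = ta.crossover(ta.ema(close, 9), ta.ema(close, 21)) // placeholder entry condition
atr = ta.atr(14)
var float entryPrice = na
var float slLong = na
var float tpLong = na
var float hiSince = na
if longEntry
    entryPrice := close
    slLong  := close - 0.786 * 1.5 * atr // SL = Fib ratio x multiplier x ATR
    tpLong  := close + 1.618 * 2.0 * atr // TP = Fib ratio x multiplier x ATR
    hiSince := high
hiSince := math.max(hiSince, high) // track the best price since entry
// Step trailing: arm at +0.5% (0.5% offset), tighten at +1% (0.2% offset)
gainPct = (hiSince - entryPrice) / entryPrice * 100
trail = gainPct >= 1.0 ? hiSince * (1 - 0.002) : gainPct >= 0.5 ? hiSince * (1 - 0.005) : na
// "Soft Stop": invalidate only on a CLOSE beyond the level, ignoring wick sweeps
softStopHit = not na(slLong) and close < math.max(slLong, nz(trail, slLong))
plot(slLong, "SL", color.red, style = plot.style_linebr)
plot(tpLong, "TP", color.green, style = plot.style_linebr)
plot(trail, "Trail", color.orange, style = plot.style_linebr)
plotshape(softStopHit, "Soft Stop", shape.xcross, location.belowbar, color.red)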
🎨 Chapter 4: Visualization Guide
Gradient Oscillator Ribbon
The 15-layer SMA ribbon visualized via plot(r1...r15) represents the "Momentum Density" of the market.
• Visuals:
◦ Cyan/Blue Ribbon: Indicates Bullish Momentum.
◦ Pink/Magenta Ribbon: Indicates Bearish Momentum.
• Interpretation:
◦ Laminar Flow: When the ribbon expands widely and flows in parallel, it signifies a robust trend where momentum is distributed evenly across timeframes. This is the ideal state for trend-following.
◦ Compression (Consolidation): If the ribbon becomes narrow, twisted, or knotted, it indicates a "Non-Tradable Zone" where the market lacks a unified direction. Traders are advised to wait for clarity.
◦ Over-Extension: If the top layer crosses the Overbought (85) or Oversold (15) lines, it visually warns of potential market overheating.
Trend Baseline
The thick, color-changing line plotted via plot(baseline) represents the Structural Backbone of the market.
• Visuals: Changes color based on the trend direction (Blue for Bullish, Pink for Bearish).
• Interpretation:
Structural Filter: Long positions are statistically favored only when price action sustains above this baseline, while short positions are favored below it.
Dynamic Support/Resistance: The baseline acts as a dynamic support level during uptrends and resistance during downtrends.
Entry Signals & Labels
Text labels ("Long Entry", "Short Entry") appear when the system detects high-probability setups grounded in quantitative confluence.
• Visuals: Labeled signals appear above/below specific candles.
• Interpretation:
These signals represent moments where Volatility (Expansion), Momentum (Alignment), and Structure (Trend) are synchronized.
Smart Exit: Labels such as "Smart Exit" or "Peak Exit" appear when the system detects momentum exhaustion or structural decay, prompting a defensive exit to preserve capital.
Dynamic TP/SL Boxes
The semi-transparent colored zones drawn via fill() represent the risk management geometry.
• Visuals: Colored boxes extending from the entry point to the Take Profit (TP) and Stop Loss (SL) levels.
• Function:
Volatility-Adjusted Geometry: Unlike static price targets, these boxes expand during high volatility (to prevent wicks from stopping you out) and contract during low volatility (to optimize Risk-to-Reward ratios).
SAR + MACD Glow
Small glowing shapes appearing above or below candles.
• Visuals: Triangle or circle glows near the price bars.
• Interpretation:
This visual indicates a secondary confirmation where Parabolic SAR and MACD align with the main trend direction. It serves as an additional confluence factor to increase confidence in the trade setup.
Support/Resistance Table
A small table located at the bottom-right of the chart.
• Function: Automatically identifies and displays recent Pivot Highs (Resistance) and Pivot Lows (Support).
• Interpretation: These levels can be used as potential targets for Take Profit or invalidation points for manual Stop Loss adjustments.
🖥️ Chapter 5: Dashboard & Operational Guide
Integrated Analytics Panel (Dashboard Overview)
To facilitate rapid decision-making without manual calculation, the system aggregates critical market dimensions into a unified "Heads-Up Display" (HUD). This panel monitors real-time metrics across multiple timeframes and analytical vectors.
A. Intermediate Structure (12H Trend)
• Function: Anchors the intraday analysis to the broader market structure using a 12-hour rolling window.
• Interpretation:
◦ Bullish (> +0.5%): Indicates a positive structural bias. Long setups align with the macro flow.
◦ Bearish (< -0.5%): Indicates structural weakness. Short setups are statistically favored.
◦ Neutral: Represents a ranging environment where the Confluence Score becomes the primary weighting factor.
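A sketch of how such a 12-hour bias could be computed; reading the "rolling window" as the percent change of the 12h close is an assumption:
//@version=5
indicator("12H structural bias (sketch)")
chg12h = request.security(syminfo.tickerid, "720", (close - close[1]) / close[1] * 100, lookahead = barmerge.lookahead_off)
bias = chg12h > 0.5 ? "Bullish" : chg12h < -0.5 ? "Bearish" : "Neutral"
plot(chg12h, "12h % Change")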
B. Composite Confluence Score (Signal Confidence)
• Definition: A probability metric derived from the synchronization of Volatility (Impulse Core), Momentum (Ribbon), and Trend (Baseline).
• Grading Scale:
Strong Buy/Sell (> 7.0 / < 3.0): Indicates full alignment across all three vectors. Represents a "Prime Setup" eligible for standard position sizing.
Buy/Sell (5.0–7.0 / 3.0–5.0): Indicates a valid trend but with moderate volatility confirmation.
Neutral: Signals conflicting data (e.g., Bullish Momentum vs. Bearish Structure). Trading is not recommended ("No-Trade Zone").
C. Statistical Deviation Status (Mean Reversion)
• Logic: Utilizes Bollinger Band deviation principles to quantify how far price has stretched from the statistical mean (20 SMA).
• Alert States:
Over-Extended (> 2.0 SD): Warning that price is statistically likely to revert to the mean (Elastic Snap-back), even if the trend remains technically valid. New entries are discouraged in this zone.
Normal: Price is within standard distribution limits, suitable for trend-following entries.
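The stated logic maps directly onto a z-score check; a minimal sketch, assuming the 20 SMA and the 2.0 SD threshold given above:
//@version=5
indicator("Over-extension check (sketch)")
basis = ta.sma(close, 20)
z = (close - basis) / ta.stdev(close, 20) // deviation from the mean, in SD units
overExtended = math.abs(z) > 2.0
plot(z, "Deviation (SD)")
bgcolor(overExtended ? color.new(color.orange, 80) : na)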
D. Volatility Regime Classification
• Metric: Compares current ATR against a 100-period historical baseline to categorize the market state.
• Regimes:
Low Volatility (Lvl < 1.0): Market Compression. Often precedes volatility expansion events.
Mid Volatility (Lvl 1.0 - 1.5): Standard operating environment.
High Volatility (Lvl > 1.5): Elevated market stress. Risk parameters should be adjusted (e.g., reduced position size) to account for increased variance.
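A minimal sketch of the regime classification, assuming ATR(14) for the current reading; the 100-period baseline and the 1.0/1.5 cut-offs follow the text above:
//@version=5
indicator("Volatility regime (sketch)")
a = ta.atr(14)
volLevel = a / ta.sma(a, 100) // current ATR vs. its 100-period baseline
regime = volLevel < 1.0 ? "Low" : volLevel <= 1.5 ? "Mid" : "High"
plot(volLevel, "Volatility Level")
hline(1.0)
hline(1.5)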
E. Performance Telemetry
• Function: Displays the historical reliability of the Trend Baseline for the current asset and timeframe.
• Operational Threshold: If the displayed Win Rate falls below 40%, it suggests the current market behavior is incoherent (choppy) and does not respect trend logic. In such cases, switching assets or timeframes is recommended.
Operational Protocols & Signal Decoding
Visual Interpretation Standards
• Laminar Flow (Trade Confirmation): A valid trend is visually confirmed when the 15-layer SMA Ribbon is fully expanded and parallel. This indicates distributed momentum across timeframes.
• Consolidation (No-Trade): If the ribbon appears twisted, knotted, or compressed, the market lacks a unified directional vector.
• Baseline Interaction: The Triple-Smoothed Baseline acts as a dynamic support/resistance filter. Long positions remain valid only while price sustains above this structure.
System Calibration (Settings)
• Adaptive Signal Filtering (Prev. Anti-Greed): Enabled by default. This logic automatically raises the required trend slope threshold following consecutive wins to mitigate behavioral bias.
• Impulse Sensitivity: Controls the reactivity of the Volatility Core. Higher settings capture faster moves but may introduce more noise.
⚙️ Chapter 6: System Configuration & Alert Guide
This section provides a complete breakdown of every adjustable setting within Impulse Reactor to assist you in tailoring the engine to your specific needs.
🌐 LANGUAGE SETTINGS (Localization)
◦ Select Language (Default: English):
Function: Instantly translates all chart labels, dashboard texts into your preferred language.
Supported: English, Korean, Chinese, Spanish
⚡ IMPULSE CORE SETTINGS (Volatility Engine)
◦ Deviation Lookback (Default: 30): The period used to calculate the standard deviation of volatility.
Role: Sets the baseline for normalizing momentum. Higher values make the core smoother but slower to react.
◦ Fast Pulse Length (Default: 10): The short-term ATR period.
Role: Detects rapid volatility expansion.
◦ Slow Pulse Length (Default: 30): The long-term ATR baseline.
Role: Establishes the background volatility level. The core signal is derived from the divergence between Fast and Slow pulses.
🎯 TP/SL SETTINGS (Risk Management)
◦ SL/TP Fibonacci (Default: 0.786 / 1.618): Selects the Fibonacci ratio used for risk calculation.
◦ SL/TP Multiplier (Default: 1.5 / 2): Applies a multiplier to the ATR-based bands.
Role: Expands or contracts the Take Profit and Stop Loss boxes. Increase these values for higher volatility assets (like Altcoins) to avoid premature stop-outs.
◦ ATR Length (Default: 14): The lookback period for calculating the Average True Range used in risk geometry.
◦ Use Soft Stop (Close Basis):
Role: If enabled, Stop Loss alerts only trigger if a candle closes beyond the invalidation level. This prevents being stopped out by wick manipulations.
🔊 RIBBON SETTINGS (Momentum Visualization)
◦ Show SMA Ribbon: Toggles the visibility of the 15-layer gradient ribbon.
◦ Ribbon Line Count (Default: 15): The number of SMA lines in the ribbon array.
◦ Ribbon Start Length (Default: 2) & Step (Default: 1): Defines the spread of the ribbon.
Role: Controls the "thickness" of the momentum density visualization. A wider step creates a broader ribbon, useful for higher timeframes.
📎 DISPLAY OPTIONS
◦ Show Entry Lines / TP/SL Box / Position Labels / S/R Levels / Dashboard: Toggles individual visual elements on the chart to reduce clutter.
◦ Show SAR+MACD Glow: Enables the secondary confirmation shapes (triangles/circles) above/below candles.
📈 TREND BASELINE (Structural Filter)
◦ Supertrend Factor (Default: 12) & ATR Period (Default: 90): Controls the sensitivity of the underlying Supertrend algorithm used for the baseline calculation.
◦ WMA Length (40) & EMA Length (14): The smoothing periods for the Triple-Smoothed Baseline.
◦ Min Trend Duration (Default: 10): The minimum number of bars the trend must be established before a signal is considered valid.
🧠 SMART EXIT (Dynamic Liquidity)
◦ Use Smart Exit: Enables the momentum exhaustion logic.
◦ Exit Threshold Score (Default: 3): The sensitivity level for triggering a Smart Exit. Lower values trigger earlier exits.
◦ Average Period (168) & Min Hold Bars (5): Defines the rolling window for momentum decay analysis and the minimum duration a trade must be held before Smart Exit logic activates.
🛡️ TRAILING STOP (Step)
◦ Use Trailing Stop: Activates the step-function trailing mechanism.
◦ Step 1 Activation % (0.5) & Offset % (0.5): The price must move 0.5% in your favor to arm the first trail level, which sets a stop 0.5% behind price.
◦ Step 2 Activation % (1) & Offset % (0.2): Once price moves 1%, the trail tightens to 0.2%, securing the position.
🌀 SAR & MACD SETTINGS (Secondary Confirmation)
◦ SAR Start/Increment/Max: Standard Parabolic SAR parameters.
◦ SAR Score Scaling (ATR): Adjusts how much weight the SAR signal has in the overall confluence score.
◦ MACD Fast/Slow/Signal: Standard MACD parameters used for the "Glow" signals.
🔄 ANTI-GREED LOGIC (Behavioral Bias)
◦ Strict Entry after Win: Enables the negative feedback loop.
◦ Strict Multiplier (Default: 1.1): Increases the entry difficulty by 10% after each win.
Role: Prevents overtrading and entering at the top of an extended trend.
🌍 HTF FILTER (Multi-Timeframe)
◦ Use Auto-Adaptive HTF Filter: Automatically selects a higher timeframe (e.g., 1H -> 4H) to filter signals.
◦ Bypass HTF on Steep Trigger: Allows an entry even against the HTF trend if the local momentum slope is exceptionally steep (to catch powerful reversals).
📉 RSI PEAK & CHOPPINESS
◦ RSI Peak Exit (Instant): Triggers an immediate exit if a sharp RSI pivot (V-shape) is detected.
◦ Choppiness Filter: Suppresses signals if the Choppiness Index is above the threshold (Default: 60), indicating a flat market.
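For reference, the standard Choppiness Index that such a filter gates on can be computed as follows (the length of 14 is an assumption; the 60 threshold is the stated default):
//@version=5
indicator("Choppiness filter (sketch)")
n = 14
ci = 100 * math.log10(math.sum(ta.tr, n) / (ta.highest(high, n) - ta.lowest(low, n))) / math.log10(n)
signalsBlocked = ci > 60 // above the threshold, the market is considered flat
plot(ci, "Choppiness Index")
hline(60)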
📐 SLOPE TRIGGER LOGIC
◦ Force Entry on Steep Slope: Overrides other filters if the price angle is extremely vertical (high velocity).
◦ Slope Sensitivity (1.5): The angle required to trigger this override.
⛔ FLAT MARKET FILTER (ADX & ATR)
◦ Use ADX Filter: Blocks signals if ADX is below the threshold (Default: 20), indicating no trend.
◦ Use ATR Flat Filter: Blocks signals if volatility drops below a critical level (dead market).
🔔 Alert Configuration Guide
Impulse Reactor is designed with a comprehensive suite of alert conditions, allowing you to automate your trading or receive real-time notifications for specific market events.
How to Set Up:
Click the "Alert" (Clock) icon in the TradingView toolbar.
Select "Impulse Reactor " from the Condition dropdown.
Choose one of the specific trigger conditions below:
🚀 Entry Signals (Trend Initiation)
Long Entry:
Trigger: Fires when a confirmed Bullish Setup is detected (Momentum + Volatility + Structure align).
Usage: Use this to enter new Long positions.
Short Entry:
Trigger: Fires when a confirmed Bearish Setup is detected.
Usage: Use this to enter new Short positions.
🎯 Profit Taking (Target Levels)
Long TP:
Trigger: Fires when price hits the calculated Take Profit level for a Long trade.
Usage: Automate partial or full profit taking.
Short TP:
Trigger: Fires when price hits the calculated Take Profit level for a Short trade.
Usage: Automate partial or full profit taking.
🛡️ Defensive Exits (Risk Management)
Smart Exit:
Trigger: Fires when the system detects momentum decay or statistical exhaustion (even if the trend hasn't fully reversed).
Usage: Recommended for tightening stops or closing positions early to preserve gains.
Overbought / Oversold:
Trigger: Fires when the ribbon extends into extreme zones.
Usage: Warning signal to prepare for a potential reversal or pullback.
💡 Secondary Confirmation (Confluence)
SAR+MACD Bullish:
Trigger: Fires when Parabolic SAR and MACD align bullishly with the main trend.
Usage: Ideal for Pyramiding (adding to an existing winning position).
SAR+MACD Bearish:
Trigger: Fires when Parabolic SAR and MACD align bearishly.
Usage: Ideal for adding to short positions.
⚠️ Chapter 7: Conclusion & Risk Disclosure
Methodological Synthesis
Impulse Reactor represents a shift from reactive price tracking to proactive energy analysis. By decomposing market activity into its atomic components — Volatility, Momentum, and Structure — and reconstructing them into a coherent decision model, the system aims to provide a quantitative framework for market engagement. It is designed not to predict the future, but to identify high-probability conditions where kinetic energy and trend structure align.
Disclaimer & Risk Warnings
◦ Educational Purpose Only
This indicator, including all associated code, documentation, and visual outputs, is provided strictly for educational and informational purposes. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments.
◦ No Guarantee of Performance
Past performance is not indicative of future results. All metrics displayed on the dashboard (including "Win Rate" and "P&L") are theoretical calculations based on historical data. These figures do not account for real-world trading factors such as slippage, liquidity gaps, spread costs, or broker commissions.
◦ High-Risk Warning
Trading cryptocurrencies, futures, and leveraged financial products involves a substantial risk of loss. The use of leverage can amplify both gains and losses. Users acknowledge that they are solely responsible for their trading decisions and should conduct independent due diligence before executing any trades.
◦ Software Limitations
The software is provided "as is" without warranty. Users should be aware that market data feeds on analysis platforms may experience latency or outages, which can affect signal generation accuracy.
Goertzel Cycle Period [Loxx]Goertzel Cycle Period is an indicator that uses the Goertzel algorithm to extract the cycle period of a ticker's price, which can then be injected into advanced, adaptive indicators and technical analysis algorithms.
The following information is extracted from: "MESA vs Goertzel-DFT, 2003 by Dennis Meyers"
Background
MESA, which stands for Maximum Entropy Spectral Analysis, is a widely used mathematical technique designed to find the frequencies present in data. MESA was developed by J.P. Burg for his Ph.D. dissertation at Stanford University in 1975. The use of the MESA technique for stocks has been written about in many articles and popularized as a trading technique by John Ehlers.
The Fourier Transform is a mathematical technique named after the famed French mathematician Jean Baptiste Joseph Fourier (1768-1830). Its digital form, the discrete-time Fourier Transform (DFT), is a widely used technique for finding the frequencies of discrete, time-sampled data. The use of the DFT has been written about in many articles in this magazine (see the references section).
Today, both MESA and the DFT are widely used in science and engineering for digital signal processing. MESA and Fourier techniques are prevalent in everyday life, in everything from television to cell phones to wireless internet to satellite communications.
MESA Advantages & Disadvantage
MESA is a mathematical technique that calculates the frequencies of a time series from the autoregressive coefficients of the series. We have all heard of regression. The simplest regression is the straight-line regression of price against time, price(t) = a + b*t, where a and b are calculated such that the squared distance between price and the best-fit straight line is minimized (also called least-squares fitting). With autoregression, we attempt to predict tomorrow's price using a linear combination of M past prices.
One of the major advantages of MESA is that the frequencies examined are not constrained to multiples of 1/N (1/N is the DFT frequency spacing, where N is the number of sample points). For instance, with the DFT and N data points we can only look at the frequencies 1/N, 2/N, …, 0.5. With MESA we can examine any frequency band within that range and any frequency spacing between i/N and (i+1)/N. For example, if we had 100 bars of price data, we might be interested only in cycles between 3 bars per cycle and 30 bars per cycle, with a frequency spacing of 0.5 bars per cycle. The DFT would examine all cycles of between 2 and 50 bars with a frequency spacing constrained to 1/100.
Another major advantage of MESA is that the dominant spectral (frequency) peaks of the price series, if they exist, can be identified with fewer samples than the DFT technique requires. For instance, if we had a 10-bar price period and a high signal-to-noise ratio, we could accurately identify this period with 40 data samples using the MESA technique; the same resolution might take 128 samples for the DFT. One major disadvantage of the MESA technique is that at low signal-to-noise ratios, that is, below 6 dB (signal amplitude/noise amplitude < 2), the ability of MESA to find the dominant frequency peaks is severely diminished (see Kay, Ref 10, p 437). With noisy price series this disadvantage can become a real problem. Another disadvantage of MESA is that once the dominant frequencies are found, another procedure has to be used to get the amplitudes and phases of those frequencies. This two-stage process can make MESA much slower than the DFT and FFT. The Fast Fourier Transform (FFT) is a computationally efficient algorithm designed to rapidly evaluate the DFT. The original article shows comparisons between the DFT and MESA using constructed signals with various noise levels.
DFT Advantages and Disadvantages.
The mathematical technique called the DFT takes a discrete time series (price) of N equally spaced samples and transforms it, through a mathematical operation, into a set of N complex numbers defined in what is called the frequency domain. Why would we want to do that? It turns out that we can do all kinds of useful analysis in the frequency domain that would be far too hard to do, computationally, with the original price series in the time domain. If we assume that the price series we are examining is made up of signals of various frequencies plus noise, then in the frequency domain we can easily filter out the frequencies we have no interest in and minimize the noise in the data. We could then transform the result back into the time domain and produce a filtered price series that would hopefully be easier to trade. The advantages of the DFT, and its fast computational algorithm the FFT, are that it is extremely fast in calculating the frequencies of the input price series, and that it can determine frequency peaks even in very noisy price series where the signal amplitude is less than the noise amplitude. One disadvantage of the FFT is that straight-line trends, parabolic trends, and end effects in the price series can distort the frequency spectrum. Another disadvantage is that it needs much more data than MESA for equivalent spectral resolution, although this has largely been nullified by the speed of today's computers.
The Goertzel algorithm attempts to resolve these problems...
What is the Goertzel algorithm?
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of the individual terms of the discrete Fourier transform (DFT). It is useful in certain practical applications, such as recognition of dual-tone multi-frequency signaling (DTMF) tones produced by the push buttons of the keypad of a traditional analog telephone. The algorithm was first described by Gerald Goertzel in 1958.
Like the DFT, the Goertzel algorithm analyses one selectable frequency component from a discrete signal. Unlike direct DFT calculations, the Goertzel algorithm applies a single real-valued coefficient at each iteration, using real-valued arithmetic for real-valued input sequences. For covering a full spectrum, the Goertzel algorithm has a higher order of complexity than fast Fourier transform (FFT) algorithms, but for computing a small number of selected frequency components, it is more numerically efficient. The simple structure of the Goertzel algorithm makes it well suited to small processors and embedded applications.
The main calculation in the Goertzel algorithm has the form of a digital filter, and for this reason the algorithm is often called a Goertzel filter. A minimal sketch of this filter form follows.
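Since the quoted article stops at the description, here is a minimal Pine Script v5 sketch of that filter form: the single-bin Goertzel recurrence run over a rolling window (the window length and target period are arbitrary illustrative choices, not the script's parameters):
```pine
//@version=5
indicator("Goertzel Single-Bin Power (sketch)", max_bars_back = 500)
N      = input.int(64, "Window length (bars)")
period = input.float(20.0, "Target cycle period (bars)")

w     = 2.0 * math.pi / period
coeff = 2.0 * math.cos(w)

// Goertzel recurrence s[n] = x[n] + coeff*s[n-1] - s[n-2], fed oldest sample first
float s1 = 0.0
float s2 = 0.0
for i = 0 to N - 1
    s0 = close[N - 1 - i] + coeff * s1 - s2
    s2 := s1
    s1 := s0

// Squared magnitude of the DFT bin at the target period
power = s1 * s1 + s2 * s2 - coeff * s1 * s2
plot(power, "Cycle power", color.teal)
```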
Where is the Goertzel algorithm used?
This package contains the advanced mathematical technique called the Goertzel algorithm for discrete Fourier transforms. This mathematical technique is currently used in today's space-age satellite and communication applications and is applied here to stock and futures trading.
While the Goertzel algorithm is unknown to many, it is used every day without people even knowing it. When you press a cell phone button, have you ever wondered how the telephone company knows which button tone you pushed? The answer is the Goertzel algorithm. The algorithm is built into tiny integrated circuits and immediately detects which of the 12 button tones (frequencies) you pushed.
Future Additions:
Bartels test for cycle significance, testing output cycles for utility
Hodrick Prescott Detrending, smoothing
Zero-Lag Regression Detrending, smoothing
High-pass or Double WMA filtering of source input price data
References:
1. Burg, J. P., "Maximum Entropy Spectral Analysis", Ph.D. dissertation, Stanford University, Stanford, CA, May 1975.
2. Kay, Steven M., "Modern Spectral Estimation", Prentice Hall, 1988.
3. Marple, Lawrence S. Jr., "Digital Spectral Analysis With Applications", Prentice Hall, 1987.
4. Press, William H., et al., "Numerical Recipes in C++: The Art of Scientific Computing", Cambridge Press, 2002.
5. Oppenheim, A., Schafer, R. and Buck, J., "Discrete Time Signal Processing", Prentice Hall, 1996, pp. 663-634.
6. Proakis, J. and Manolakis, D., "Digital Signal Processing: Principles, Algorithms and Applications", Prentice Hall, 1996, pp. 480-481.
7. Goertzel, G., "An Algorithm for the Evaluation of Finite Trigonometric Series", American Mathematical Monthly, Vol. 65, 1958, pp. 34-35.
Machine Learning Momentum Index (MLMI) [Zeiierman]█ Overview
The Machine Learning Momentum Index (MLMI) represents the next step in oscillator trading. By blending traditional momentum analysis with machine learning, MLMI delivers a potent, dynamic tool suited to the complexities of modern markets, giving traders an adaptive, real-time view of market momentum and prevailing trends.
█ How It Works:
Momentum Analysis: MLMI employs a dual-layer analysis, utilizing quick and slow weighted moving averages (WMA) of the Relative Strength Index (RSI) to gauge the market's momentum and direction.
Machine Learning Integration: Through the k-Nearest Neighbors (k-NN) algorithm, MLMI intelligently examines historical data to make more accurate momentum predictions, adapting to the intricate patterns of the market.
MLMI's precise calculation involves:
Weighted Moving Averages: Calculations of quick (5-period) and slow (20-period) WMAs of the RSI to track short-term and long-term momentum.
k-Nearest Neighbors Algorithm: Distances between the current parameters and previous data points are measured, and the nearest neighbors are used for predictive modeling (sketched after this list).
Trend Analysis: Recognition of prevailing trends through the relationship between quick and slow-moving averages.
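A stripped-down sketch of that k-NN step, assuming the features are the quick and slow RSI-WMAs and the stored outcomes are signed one-bar moves (the published script's exact features, labels, and distance metric may differ):
```pine
//@version=5
indicator("k-NN Momentum Vote (sketch)")
kNeighbors = input.int(5, "Neighbors (k)")
quick = ta.wma(ta.rsi(close, 14), 5)
slow  = ta.wma(ta.rsi(close, 14), 20)

// Rolling memory of past feature pairs and their one-bar outcome labels
var f1  = array.new_float()
var f2  = array.new_float()
var lbl = array.new_float()
if not na(quick[1]) and not na(slow[1])
    array.push(f1, quick[1])
    array.push(f2, slow[1])
    array.push(lbl, close > close[1] ? 1.0 : -1.0)

// Naive O(n*k) scan: sum the labels of the k nearest stored points
// (a production version would cap the stored history to bound the loop cost)
float vote = 0.0
int n = array.size(lbl)
if n > kNeighbors and not na(quick) and not na(slow)
    used = array.new_bool(n, false)
    for j = 1 to kNeighbors
        float best = na
        int bestIdx = -1
        for i = 0 to n - 1
            if not array.get(used, i)
                d = math.abs(array.get(f1, i) - quick) + math.abs(array.get(f2, i) - slow)
                if na(best) or d < best
                    best := d
                    bestIdx := i
        array.set(used, bestIdx, true)
        vote += array.get(lbl, bestIdx)
plot(vote, "k-NN vote", color.purple)
```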
█ How to use
The Machine Learning Momentum Index (MLMI) can be utilized in much the same way as traditional trend and momentum oscillators, providing key insights into market direction and strength. What sets MLMI apart is its integration of artificial intelligence, allowing it to adapt dynamically to market changes and offer a more nuanced and responsive analysis.
Identifying Trend Direction and Strength: The MLMI serves as a tool to recognize market trends, signaling whether the momentum is upward or downward. It also provides insights into the intensity of the momentum, helping traders understand both the direction and strength of prevailing market trends.
Identifying Consolidation Areas: When the MLMI Prediction line and the WMA of the MLMI Prediction line become flat/oscillate around the mid-level, it's a strong sign that the market is in a consolidation phase. This insight from the MLMI allows traders to recognize periods of market indecision.
Recognizing Overbought or Oversold Conditions: By identifying levels where the market may be overbought or oversold, MLMI offers insights into potential price corrections or reversals.
█ Settings
Prediction Data (k)
This parameter controls the number of neighbors to consider while making a prediction using the k-Nearest Neighbors (k-NN) algorithm. By modifying the value of k, you can change how sensitive the prediction is to local fluctuations in the data.
A smaller value of k will make the prediction more sensitive to local variations and can lead to a more erratic prediction line.
A larger value of k will consider more neighbors, thus making the prediction more stable but potentially less responsive to sudden changes.
Trend length
This parameter controls the length of the trend used in computing the momentum. This length refers to the number of periods over which the momentum is calculated, affecting how quickly the indicator reacts to changes in the underlying price movements.
A shorter trend length (smaller momentumWindow) will make the indicator more responsive to short-term price changes, potentially generating more signals but at the risk of more false alarms.
A longer trend length (larger momentumWindow) will make the indicator smoother and less responsive to short-term noise, but it may lag in reacting to significant price changes.
Please note that the Machine Learning Momentum Index (MLMI) might not be effective on higher timeframes, such as daily or above. This limitation arises because there may not be enough data at these timeframes to provide accurate momentum and trend analysis. To overcome this challenge and make the most of what MLMI has to offer, it's recommended to use the indicator on lower timeframes.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
loxxfftLibrary "loxxfft"
This code is a library for performing Fast Fourier Transform (FFT) operations. FFT is an algorithm that can quickly compute the discrete Fourier transform (DFT) of a sequence. The library includes functions for performing FFTs on both real and complex data. It also includes functions for fast correlation and convolution, which are operations that can be performed efficiently using FFTs. Additionally, the library includes functions for fast sine and cosine transforms.
Reference:
www.alglib.net
fastfouriertransform(a, nn, inversefft)
Returns Fast Fourier Transform
Parameters:
a (float[]) : float[], An array of real and imaginary parts of the function values. The real part is stored at even indices, and the imaginary part is stored at odd indices.
nn (int) : int, The number of function values. It must be a power of two, but the algorithm does not validate this.
inversefft (bool) : bool, A boolean value that indicates the direction of the transformation. If True, it performs the inverse FFT; if False, it performs the direct FFT.
Returns: float[], Modifies the input array a in-place, which means that the transformed data (the FFT result for direct transformation or the inverse FFT result for inverse transformation) will be stored in the same array a after the function execution. The transformed data will have real and imaginary parts interleaved, with the real parts at even indices and the imaginary parts at odd indices.
realfastfouriertransform(a, tnn, inversefft)
Returns Real Fast Fourier Transform
Parameters:
a (float[]) : float[], A float array containing the real-valued function samples.
tnn (int) : int, The number of function values (must be a power of 2, but the algorithm does not validate this condition).
inversefft (bool) : bool, A boolean flag that indicates the direction of the transformation (True for inverse, False for direct).
Returns: float[], Modifies the input array a in-place, meaning that the transformed data (the FFT result for direct transformation or the inverse FFT result for inverse transformation) will be stored in the same array a after the function execution. A brief usage sketch follows.
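As a usage sketch only (the import path and version below are placeholders; check the published library for the real ones), this collects a power-of-two window of closes and transforms it in place:
```pine
//@version=5
indicator("realfastfouriertransform usage (sketch)", max_bars_back = 500)
import loxx/loxxfft/1 as fft  // placeholder path/version — verify before use

N = 64  // must be a power of two
// Collect the last N closes, oldest first
a = array.new_float()
for i = 0 to N - 1
    array.push(a, close[N - 1 - i])

if barstate.islast
    fft.realfastfouriertransform(a, N, false)  // direct FFT, computed in place
    // a now holds the spectrum: real parts at even indices, imaginary at odd
    label.new(bar_index, high, "DC term: " + str.tostring(array.get(a, 0)))
```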
fastsinetransform(a, tnn, inversefst)
Returns Fast Discrete Sine Conversion
Parameters:
a (float[]) : float[], An array of real numbers representing the function values.
tnn (int) : int, Number of function values (must be a power of two, but the code doesn't validate this).
inversefst (bool) : bool, A boolean flag indicating the direction of the transformation. If True, it performs the inverse FST, and if False, it performs the direct FST.
Returns: float[], The output is the transformed array 'a', which will contain the result of the transformation.
fastcosinetransform(a, tnn, inversefct)
Returns Fast Discrete Cosine Transform
Parameters:
a (float[]) : float[], This is a floating-point array representing the sequence of values (time-domain) that you want to transform. The function will perform the Fast Cosine Transform (FCT) or the inverse FCT on this input array, depending on the value of the inversefct parameter. The transformed result will also be stored in this same array, which means the function modifies the input array in-place.
tnn (int) : int, This is an integer value representing the number of data points in the input array a. It is used to determine the size of the input array and control the loops in the algorithm. Note that the size of the input array should be a power of 2 for the Fast Cosine Transform algorithm to work correctly.
inversefct (bool) : bool, This is a boolean value that controls whether the function performs the regular Fast Cosine Transform or the inverse FCT. If inversefct is set to true, the function will perform the inverse FCT, and if set to false, the regular FCT will be performed. The inverse FCT can be used to transform data back into its original form (time-domain) after the regular FCT has been applied.
Returns: float[], The resulting transformed array is stored in the input array a. This means that the function modifies the input array in-place and does not return a new array.
fastconvolution(signal, signallen, response, negativelen, positivelen)
Convolution using FFT
Parameters:
signal (float[]) : float[], This is an array of real numbers representing the input signal that will be convolved with the response function. The elements are numbered from 0 to SignalLen-1.
signallen (int) : int, This is an integer representing the length of the input signal array. It specifies the number of elements in the signal array.
response (float[]) : float[], This is an array of real numbers representing the response function used for convolution. The response function consists of two parts: one corresponding to positive argument values and the other to negative argument values. Array elements with numbers from 0 to NegativeLen match the response values at points from -NegativeLen to 0, respectively. Array elements with numbers from NegativeLen+1 to NegativeLen+PositiveLen correspond to the response values in points from 1 to PositiveLen, respectively.
negativelen (int) : int, This is an integer representing the "negative length" of the response function. It indicates the number of elements in the response function array that correspond to negative argument values. Outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
positivelen (int) : int, This is an integer representing the "positive length" of the response function. It indicates the number of elements in the response function array that correspond to positive argument values. Similar to negativelen, outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
Returns: float[], The resulting convolved values are stored back in the input signal array.
fastcorrelation(signal, signallen, pattern, patternlen)
Returns Correlation using FFT
Parameters:
signal (float[]) : float[], This is an array of real numbers representing the signal to be correlated with the pattern. The elements are numbered from 0 to SignalLen-1.
signallen (int) : int, This is an integer representing the length of the input signal array.
pattern (float[]) : float[], This is an array of real numbers representing the pattern to be correlated with the signal. The elements are numbered from 0 to PatternLen-1.
patternlen (int) : int, This is an integer representing the length of the pattern array.
Returns: float[], The signal array containing the correlation values at points from 0 to SignalLen-1.
tworealffts(a1, a2, a, b, tn)
Returns Fast Fourier Transform of Two Real Functions
Parameters:
a1 (float[]) : float[], An array of real numbers, representing the values of the first function.
a2 (float[]) : float[], An array of real numbers, representing the values of the second function.
a (float[]) : float[], An output array to store the Fourier transform of the first function.
b (float[]) : float[], An output array to store the Fourier transform of the second function.
tn (int) : int, An integer representing the number of function values. It must be a power of two, but the algorithm doesn't validate this condition.
Returns: float[], The a and b arrays will contain the Fourier transform of the first and second functions, respectively. Note that the function overwrites the input arrays a and b.
█ Detailed explanation of each function
Fast Fourier Transform
The fastfouriertransform() function takes three input parameters:
1. a: An array of real and imaginary parts of the function values. The real part is stored at even indices, and the imaginary part is stored at odd indices.
2. nn: The number of function values. It must be a power of two, but the algorithm does not validate this.
3. inversefft: A boolean value that indicates the direction of the transformation. If True, it performs the inverse FFT; if False, it performs the direct FFT.
The function performs the FFT using the Cooley-Tukey algorithm, which is an efficient algorithm for computing the discrete Fourier transform (DFT) and its inverse. The Cooley-Tukey algorithm recursively breaks down the DFT of a sequence into smaller DFTs of subsequences, leading to a significant reduction in computational complexity. The algorithm's time complexity is O(n log n), where n is the number of samples.
The fastfouriertransform() function first initializes variables and determines the direction of the transformation based on the inversefft parameter. If inversefft is True, the isign variable is set to -1; otherwise, it is set to 1.
Next, the function performs the bit-reversal operation. This is a necessary step before calculating the FFT, as it rearranges the input data in a specific order required by the Cooley-Tukey algorithm. The bit-reversal is performed using a loop that iterates through the nn samples, swapping the data elements according to their bit-reversed index.
After the bit-reversal operation, the function iteratively computes the FFT using the Cooley-Tukey algorithm. It performs calculations in a loop that goes through different stages, doubling the size of the sub-FFT at each stage. Within each stage, the Cooley-Tukey algorithm calculates the butterfly operations, which are mathematical operations that combine the results of smaller DFTs into the final DFT. The butterfly operations involve complex number multiplication and addition, updating the input array a with the computed values.
The loop also calculates the twiddle factors, which are complex exponential factors used in the butterfly operations. The twiddle factors are calculated using trigonometric functions, such as sine and cosine, based on the angle theta. The variables wpr, wpi, wr, and wi are used to store intermediate values of the twiddle factors, which are updated in each iteration of the loop.
Finally, if the inversefft parameter is True, the function divides the result by the number of samples nn to obtain the correct inverse FFT result. This normalization step is performed using a loop that iterates through the array a and divides each element by nn.
In summary, the fastfouriertransform() function is an implementation of the Cooley-Tukey FFT algorithm, which is an efficient algorithm for computing the DFT and its inverse. This FFT library can be used for a variety of applications, such as signal processing, image processing, audio processing, and more.
Real Fast Fourier Transform
The realfastfouriertransform() function performs a fast Fourier transform (FFT) specifically for real-valued functions. The FFT is an efficient algorithm used to compute the discrete Fourier transform (DFT) and its inverse, which are fundamental tools in signal processing, image processing, and other related fields.
This function takes three input parameters:
1. a - A float array containing the real-valued function samples.
2. tnn - The number of function values (must be a power of 2, but the algorithm does not validate this condition).
3. inversefft - A boolean flag that indicates the direction of the transformation (True for inverse, False for direct).
The function modifies the input array a in-place, meaning that the transformed data (the FFT result for direct transformation or the inverse FFT result for inverse transformation) will be stored in the same array a after the function execution.
The algorithm uses a combination of complex-to-complex FFT and additional transformations specific to real-valued data to optimize the computation. It takes into account the symmetry properties of the real-valued input data to reduce the computational complexity.
Here's a detailed walkthrough of the algorithm:
1. Depending on the inversefft flag, the initial values for ttheta, c1, and c2 are determined. These values are used for the initial data preprocessing and post-processing steps specific to the real-valued FFT.
2. The preprocessing step computes the initial real and imaginary parts of the data using a combination of sine and cosine terms with the input data. This step effectively converts the real-valued input data into complex-valued data suitable for the complex-to-complex FFT.
3. The complex-to-complex FFT is then performed on the preprocessed complex data. This involves bit-reversal reordering, followed by the Cooley-Tukey radix-2 decimation-in-time algorithm. This part of the code is similar to the fastfouriertransform() function described earlier.
4. After the complex-to-complex FFT, a post-processing step is performed to obtain the final real-valued output data. This involves updating the real and imaginary parts of the transformed data using sine and cosine terms, as well as the values c1 and c2.
5. Finally, if the inversefft flag is True, the output data is divided by the number of samples (nn) to obtain the inverse DFT.
The function does not return a value explicitly. Instead, the transformed data is stored in the input array a. After the function execution, you can access the transformed data in the a array, which will have the real part at even indices and the imaginary part at odd indices.
Fast Sine Transform
This code defines a function called fastsinetransform that performs a Fast Discrete Sine Transform (FST) on an array of real numbers. The function takes three input parameters:
1. a (float array): An array of real numbers representing the function values.
2. tnn (int): Number of function values (must be a power of two, but the code doesn't validate this).
3. inversefst (bool): A boolean flag indicating the direction of the transformation. If True, it performs the inverse FST, and if False, it performs the direct FST.
The output is the transformed array 'a', which will contain the result of the transformation.
The code starts by initializing several variables, including trigonometric constants for the sine transform. It then sets the first value of the array 'a' to 0 and calculates the initial values of 'y1' and 'y2', which are used to update the input array 'a' in the following loop.
The first loop (with index 'jx') iterates from 2 to (tm + 1), where 'tm' is half of the number of input samples 'tnn'. This loop is responsible for calculating the initial sine transform of the input data.
The second loop (with index 'ii') is a bit-reversal loop. It reorders the elements in the array 'a' based on the bit-reversed indices of the original order.
The third loop (with index 'ii') iterates while 'n' is greater than 'mmax', which starts at 2 and doubles each iteration. This loop performs the actual Fast Discrete Sine Transform. It calculates the sine transform using the Danielson-Lanczos lemma, which is a divide-and-conquer strategy for calculating Discrete Fourier Transforms (DFTs) efficiently.
The fourth loop (with index 'ix') is responsible for the final phase adjustments needed for the sine transform, updating the array 'a' accordingly.
The fifth loop (with index 'jj') updates the array 'a' one more time by dividing each element by 2 and calculating the sum of the even-indexed elements.
Finally, if the 'inversefst' flag is True, the code scales the transformed data by a factor of 2/tnn to get the inverse Fast Sine Transform.
In summary, the code performs a Fast Discrete Sine Transform on an input array of real numbers, either in the direct or inverse direction, and returns the transformed array. The algorithm is based on the Danielson-Lanczos lemma and uses a divide-and-conquer strategy for efficient computation.
Fast Cosine Transform
This code defines a function called fastcosinetransform that takes three parameters: a floating-point array a, an integer tnn, and a boolean inversefct. The function calculates the Fast Cosine Transform (FCT) or the inverse FCT of the input array, depending on the value of the inversefct parameter.
The Fast Cosine Transform is an algorithm that converts a sequence of values (time-domain) into a frequency domain representation. It is closely related to the Fast Fourier Transform (FFT) and can be used in various applications, such as signal processing and image compression.
Here's a detailed explanation of the code:
1. The function starts by initializing a number of variables, including counters, intermediate values, and constants.
2. The initial steps of the algorithm are performed. This includes calculating some trigonometric values and updating the input array a with the help of intermediate variables.
3. The code then enters a loop (from jx = 2 to tnn / 2). Within this loop, the algorithm computes and updates the elements of the input array a.
4. After the loop, the function prepares some variables for the next stage of the algorithm.
5. The next part of the algorithm is a series of nested loops that perform the bit-reversal permutation and apply the FCT to the input array a.
6. The code then calculates some additional trigonometric values, which are used in the next loop.
7. The following loop (from ix = 2 to tnn / 4 + 1) computes and updates the elements of the input array a using the previously calculated trigonometric values.
8. The input array a is further updated with the final calculations.
9. In the last loop (from j = 4 to tnn), the algorithm computes and updates the sum of elements in the input array a.
10. Finally, if the inversefct parameter is set to true, the function scales the input array a to obtain the inverse FCT.
The resulting transformed array is stored in the input array a. This means that the function modifies the input array in-place and does not return a new array.
Fast Convolution
This code defines a function called fastconvolution that performs the convolution of a given signal with a response function using the Fast Fourier Transform (FFT) technique. Convolution is a mathematical operation used in signal processing to combine two signals, producing a third signal representing how the shape of one signal is modified by the other.
The fastconvolution function takes the following input parameters:
1. float[] signal: This is an array of real numbers representing the input signal that will be convolved with the response function. The elements are numbered from 0 to SignalLen-1.
2. int signallen: This is an integer representing the length of the input signal array. It specifies the number of elements in the signal array.
3. float[] response: This is an array of real numbers representing the response function used for convolution. The response function consists of two parts: one corresponding to positive argument values and the other to negative argument values. Array elements with numbers from 0 to NegativeLen match the response values at points from -NegativeLen to 0, respectively. Array elements with numbers from NegativeLen+1 to NegativeLen+PositiveLen correspond to the response values in points from 1 to PositiveLen, respectively.
4. int negativelen: This is an integer representing the "negative length" of the response function. It indicates the number of elements in the response function array that correspond to negative argument values. Outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
5. int positivelen: This is an integer representing the "positive length" of the response function. It indicates the number of elements in the response function array that correspond to positive argument values. Similar to negativelen, outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
The function works by:
1. Calculating the length nl of the arrays used for FFT, ensuring it's a power of 2 and large enough to hold the signal and response.
2. Creating two new arrays, a1 and a2, of length nl and initializing them with the input signal and response function, respectively.
3. Applying the forward FFT (realfastfouriertransform) to both arrays, a1 and a2.
4. Performing element-wise multiplication of the FFT results in the frequency domain.
5. Applying the inverse FFT (realfastfouriertransform) to the multiplied results in a1.
6. Updating the original signal array with the convolution result, which is stored in the a1 array.
The result of the convolution is stored in the input signal array at the function exit (usage sketched below).
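A usage sketch, smoothing a 32-bar window of closes with a 3-tap box kernel (the import path is a placeholder; verify against the published library):
```pine
//@version=5
indicator("fastconvolution usage (sketch)", max_bars_back = 500)
import loxx/loxxfft/1 as fft  // placeholder path/version — verify before use

N = 32
sig = array.new_float()
for i = 0 to N - 1
    array.push(sig, close[N - 1 - i])

// Layout per the docs: resp[0..1] cover points -1..0, resp[2] covers point +1
resp = array.from(1.0 / 3, 1.0 / 3, 1.0 / 3)

if barstate.islast
    fft.fastconvolution(sig, N, resp, 1, 1)
    // sig now holds the smoothed window, computed in place
```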
Fast Correlation
This code defines a function called fastcorrelation that computes the correlation between a signal and a pattern using the Fast Fourier Transform (FFT) method. The function takes four input arguments and modifies the input signal array to store the correlation values.
Input arguments:
1. float[] signal: This is an array of real numbers representing the signal to be correlated with the pattern. The elements are numbered from 0 to SignalLen-1.
2. int signallen: This is an integer representing the length of the input signal array.
3. float[] pattern: This is an array of real numbers representing the pattern to be correlated with the signal. The elements are numbered from 0 to PatternLen-1.
4. int patternlen: This is an integer representing the length of the pattern array.
The function performs the following steps:
1. Calculate the required size nl for the FFT by finding the smallest power of 2 that is greater than or equal to the sum of the lengths of the signal and the pattern.
2. Create two new arrays a1 and a2 with the length nl and initialize them to 0.
3. Copy the signal array into a1 and pad it with zeros up to the length nl.
4. Copy the pattern array into a2 and pad it with zeros up to the length nl.
5. Compute the FFT of both a1 and a2.
6. Perform element-wise multiplication of the frequency-domain representation of a1 and the complex conjugate of the frequency-domain representation of a2.
7. Compute the inverse FFT of the result obtained in step 6.
8. Store the resulting correlation values in the original signal array.
At the end of the function, the signal array contains the correlation values at points from 0 to SignalLen-1 (see the usage sketch below).
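The matching correlation call, sliding a short hypothetical pattern across the same kind of window (import path again a placeholder):
```pine
//@version=5
indicator("fastcorrelation usage (sketch)", max_bars_back = 500)
import loxx/loxxfft/1 as fft  // placeholder path/version — verify before use

N = 32
sig = array.new_float()
for i = 0 to N - 1
    array.push(sig, close[N - 1 - i])

pat = array.from(1.0, 2.0, 2.0, 1.0)  // hypothetical shape to search for

if barstate.islast
    fft.fastcorrelation(sig, N, pat, array.size(pat))
    // sig[0..N-1] now holds the correlation value at each lag
```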
Fast Fourier Transform of Two Real Functions
This code defines a function called tworealffts that computes the Fast Fourier Transform (FFT) of two real-valued functions (a1 and a2) using a Cooley-Tukey-based radix-2 Decimation in Time (DIT) algorithm. The FFT is a widely used algorithm for computing the discrete Fourier transform (DFT) and its inverse.
Input parameters:
1. float[] a1: an array of real numbers, representing the values of the first function.
2. float[] a2: an array of real numbers, representing the values of the second function.
3. float[] a: an output array to store the Fourier transform of the first function.
4. float[] b: an output array to store the Fourier transform of the second function.
5. int tn: an integer representing the number of function values. It must be a power of two, but the algorithm doesn't validate this condition.
The function performs the following steps:
1. Combine the two input arrays, a1 and a2, into a single array a by interleaving their elements.
2. Perform a 1D FFT on the combined array a using the radix-2 DIT algorithm.
3. Separate the FFT results of the two input functions from the combined array a and store them in output arrays a and b.
Here is a detailed breakdown of the radix-2 DIT algorithm used in this code:
1. Bit-reverse the order of the elements in the combined array a.
2. Initialize the loop variables mmax, istep, and theta.
3. Enter the main loop that iterates through different stages of the FFT.
a. Compute the sine and cosine values for the current stage using the theta variable.
b. Initialize the loop variables wr and wi for the current stage.
c. Enter the inner loop that iterates through the butterfly operations within each stage.
i. Perform the butterfly operation on the elements of array a.
ii. Update the loop variables wr and wi for the next butterfly operation.
d. Update the loop variables mmax, istep, and theta for the next stage.
4. Separate the FFT results of the two input functions from the combined array a and store them in output arrays a and b.
At the end of the function, the a and b arrays will contain the Fourier transform of the first and second functions, respectively. Note that the function overwrites the input arrays a and b.
█ Example scripts using functions contained in loxxfft
Real-Fast Fourier Transform of Price w/ Linear Regression
Real-Fast Fourier Transform of Price Oscillator
Normalized, Variety, Fast Fourier Transform Explorer
Variety RSI of Fast Discrete Cosine Transform
STD-Stepped Fast Cosine Transform Moving Average
Stochastic Enhanced [DCAUT]█ Stochastic Enhanced
📊 ORIGINALITY & INNOVATION
The Stochastic Enhanced indicator builds upon George Lane's classic momentum oscillator (developed in the late 1950s) by providing comprehensive smoothing algorithm flexibility. While traditional implementations limit users to Simple Moving Average (SMA) smoothing, this enhanced version offers 21 advanced smoothing algorithms, allowing traders to optimize the indicator's characteristics for different market conditions and trading styles.
Key Improvements:
Extended from single SMA smoothing to 21 professional-grade algorithms including adaptive filters (KAMA, FRAMA), zero-lag methods (ZLEMA, T3), and advanced digital filters (Kalman, Laguerre)
Maintains backward compatibility with traditional Stochastic calculations through SMA default setting
Unified smoothing algorithm applies to both %K and %D lines for consistent signal processing characteristics
Enhanced visual feedback with clear color distinction and background fill highlighting for intuitive signal recognition
Comprehensive alert system covering crossovers and zone entries for systematic trade management
Differentiation from Traditional Stochastic:
Traditional Stochastic indicators use fixed SMA smoothing, which introduces consistent lag regardless of market volatility. This enhanced version addresses the limitation by offering adaptive algorithms that adjust to market conditions (KAMA, FRAMA), reduce lag without sacrificing smoothness (ZLEMA, T3, HMA), or provide superior noise filtering (Kalman Filter, Laguerre filters). The flexibility helps traders balance responsiveness and stability according to their specific needs.
📐 MATHEMATICAL FOUNDATION
Core Stochastic Calculation:
The Stochastic Oscillator measures the position of the current close relative to the high-low range over a specified period:
Step 1: Raw %K Calculation
%K_raw = 100 × (Close - Lowest Low) / (Highest High - Lowest Low)
Where:
Close = Current closing price
Lowest Low = Lowest low over the %K Length period
Highest High = Highest high over the %K Length period
Result ranges from 0 (close at period low) to 100 (close at period high)
Step 2: Smoothed %K Calculation
%K = MA(%K_raw, K Smoothing Period, MA Type)
Where:
MA = Selected moving average algorithm (SMA, EMA, etc.)
K Smoothing = 1 for Fast Stochastic, 3+ for Slow Stochastic
Traditional Fast Stochastic uses %K_raw directly without smoothing
Step 3: Signal Line %D Calculation
%D = MA(%K, D Smoothing Period, MA Type)
Where:
%D acts as a signal line and moving average of %K
D Smoothing typically set to 3 periods in traditional implementations
Both %K and %D use the same MA algorithm for consistent behavior (the full three-step calculation is sketched below)
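The three steps above reduce to a few lines of Pine Script v5. This sketch uses the traditional SMA smoothing; any of the 21 algorithms would simply replace ta.sma:
```pine
//@version=5
indicator("Stochastic Core (sketch)")
kLen    = input.int(14, "%K Length")
kSmooth = input.int(3, "%K Smoothing")
dSmooth = input.int(3, "%D Smoothing")

lo = ta.lowest(low, kLen)
hi = ta.highest(high, kLen)
rawK = 100 * (close - lo) / (hi - lo)  // Step 1 (na when hi == lo)
k = ta.sma(rawK, kSmooth)              // Step 2
d = ta.sma(k, dSmooth)                 // Step 3

plot(k, "%K", color.blue)
plot(d, "%D", color.orange)
hline(80)
hline(20)
```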
Available Smoothing Algorithms (21 Options):
Standard Moving Averages:
SMA (Simple): Equal-weighted average, traditional default, consistent lag characteristics
EMA (Exponential): Recent price emphasis, faster response to changes, exponential decay weighting
RMA (Rolling/Wilder's): Smoothed average used in RSI, less reactive than EMA
WMA (Weighted): Linear weighting favoring recent data, moderate responsiveness
VWMA (Volume-Weighted): Incorporates volume data, reflects market participation intensity
Advanced Moving Averages:
HMA (Hull): Reduced lag with smoothness, uses weighted moving averages and square root period
ALMA (Arnaud Legoux): Gaussian distribution weighting, minimal lag with good noise reduction
LSMA (Least Squares): Linear regression based, fits trend line to data points
DEMA (Double Exponential): Reduced lag compared to EMA, uses double smoothing technique
TEMA (Triple Exponential): Further lag reduction, triple smoothing with lag compensation
ZLEMA (Zero-Lag Exponential): Lag elimination attempt using error correction, very responsive
TMA (Triangular): Double-smoothed SMA, very smooth but slower response
Adaptive & Intelligent Filters:
T3 (Tilson T3): Six-pass exponential smoothing with volume factor adjustment, excellent smoothness
FRAMA (Fractal Adaptive): Adapts to market fractal dimension, faster in trends, slower in ranges
KAMA (Kaufman Adaptive): Efficiency ratio based adaptation, responds to volatility changes
McGinley Dynamic: Self-adjusting mechanism following price more accurately, reduced whipsaws
Kalman Filter: Optimal estimation algorithm from aerospace engineering, dynamic noise filtering
Advanced Digital Filters:
Ultimate Smoother: Advanced digital filter design, superior noise rejection with minimal lag
Laguerre Filter: Time-domain filter with N-order implementation, adjustable lag characteristics
Laguerre Binomial Filter: 6-pole Laguerre filter, extremely smooth output for long-term analysis
Super Smoother: Butterworth filter implementation, removes high-frequency noise effectively
📊 COMPREHENSIVE SIGNAL ANALYSIS
Absolute Level Interpretation (%K Line):
%K Above 80: Overbought condition, price near period high, potential reversal or pullback zone, caution for new long entries
%K in 70-80 Range: Strong upward momentum, bullish trend confirmation, uptrend likely continuing
%K in 50-70 Range: Moderate bullish momentum, neutral to positive outlook, consolidation or mild uptrend
%K in 30-50 Range: Moderate bearish momentum, neutral to negative outlook, consolidation or mild downtrend
%K in 20-30 Range: Strong downward momentum, bearish trend confirmation, downtrend likely continuing
%K Below 20: Oversold condition, price near period low, potential bounce or reversal zone, caution for new short entries
Crossover Signal Analysis:
%K Crosses Above %D (Bullish Cross): Momentum shifting bullish, faster line overtakes slower signal, consider long entry especially in oversold zone, strongest when occurring below 20 level
%K Crosses Below %D (Bearish Cross): Momentum shifting bearish, faster line falls below slower signal, consider short entry especially in overbought zone, strongest when occurring above 80 level
Crossover in Midrange (40-60): Less reliable signals, often in choppy sideways markets, require additional confirmation from trend or volume analysis
Multiple Failed Crosses: Indicates ranging market or choppy conditions, reduce position sizes or avoid trading until clear directional move
Advanced Divergence Patterns (%K Line vs Price):
Bullish Divergence: Price makes lower low while %K makes higher low, indicates weakening bearish momentum, potential trend reversal upward, more reliable when %K in oversold zone
Bearish Divergence: Price makes higher high while %K makes lower high, indicates weakening bullish momentum, potential trend reversal downward, more reliable when %K in overbought zone
Hidden Bullish Divergence: Price makes higher low while %K makes lower low, indicates trend continuation in uptrend, bullish trend strength confirmation
Hidden Bearish Divergence: Price makes lower high while %K makes higher high, indicates trend continuation in downtrend, bearish trend strength confirmation
Momentum Strength Analysis (%K Line Slope):
Steep %K Slope: Rapid momentum change, strong directional conviction, potential for extended moves but also increased reversal risk
Gradual %K Slope: Steady momentum development, sustainable trends more likely, lower probability of sharp reversals
Flat or Horizontal %K: Momentum stalling, potential reversal or consolidation ahead, wait for directional break before committing
%K Oscillation Within Range: Indicates ranging market, sideways price action, better suited for range-trading strategies than trend following
🎯 STRATEGIC APPLICATIONS
Mean Reversion Strategy (Range-Bound Markets):
Identify ranging market conditions using price action or Bollinger Bands
Wait for Stochastic to reach extreme zones (above 80 for overbought, below 20 for oversold)
Enter counter-trend position when %K crosses %D in extreme zone (sell on bearish cross above 80, buy on bullish cross below 20)
Set profit targets near opposite extreme or midline (50 level)
Use tight stop-loss above recent swing high/low to protect against breakout scenarios
Exit when Stochastic reaches the opposite extreme or %K crosses %D in the opposite direction (entry condition sketched below)
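The core entry condition from this checklist condenses to two boolean expressions. A minimal sketch, assuming standard 14/3/3 settings:
```pine
//@version=5
indicator("Stochastic Mean Reversion Entries (sketch)", overlay=true)
k = ta.sma(ta.stoch(close, high, low, 14), 3)
d = ta.sma(k, 3)

buy  = ta.crossover(k, d) and k < 20   // bullish cross inside the oversold zone
sell = ta.crossunder(k, d) and k > 80  // bearish cross inside the overbought zone

plotshape(buy,  "Buy",  shape.triangleup,   location.belowbar, color.green)
plotshape(sell, "Sell", shape.triangledown, location.abovebar, color.red)
```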
Trend Following with Momentum Confirmation:
Identify primary trend direction using higher timeframe analysis or moving averages
Wait for Stochastic pullback to oversold zone (<20) in uptrend or overbought zone (>80) in downtrend
Enter in trend direction when %K crosses %D confirming momentum shift (bullish cross in uptrend, bearish cross in downtrend)
Use wider stops to accommodate normal trend volatility
Add to position on subsequent pullbacks showing similar Stochastic pattern
Exit when Stochastic shows opposite extreme with failed cross or bearish/bullish divergence
Divergence-Based Reversal Strategy:
Scan for divergence between price and Stochastic at swing highs/lows
Confirm divergence with at least two price pivots showing divergent Stochastic readings
Wait for %K to cross %D in direction of anticipated reversal as entry trigger
Enter position in divergence direction with stop beyond recent swing extreme
Target profit at key support/resistance levels or Fibonacci retracements
Scale out as Stochastic reaches opposite extreme zone
Multi-Timeframe Momentum Alignment:
Analyze Stochastic on higher timeframe (4H or Daily) for primary trend bias
Switch to lower timeframe (1H or 15M) for precise entry timing
Only take trades where lower timeframe Stochastic signal aligns with higher timeframe momentum direction
Higher timeframe Stochastic in bullish zone (>50) = only take long entries on lower timeframe
Higher timeframe Stochastic in bearish zone (<50) = only take short entries on lower timeframe
Exit when the lower timeframe shows a counter-signal or higher timeframe momentum reverses (see the MTF sketch below)
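A sketch of the alignment rule, using a one-bar-delayed higher-timeframe %K so the bias is built from confirmed values (the 240-minute default and 14/3/3 settings are illustrative):
```pine
//@version=5
indicator("MTF Stochastic Bias (sketch)", overlay=true)
htf = input.timeframe("240", "Higher timeframe")

k = ta.sma(ta.stoch(close, high, low, 14), 3)
d = ta.sma(k, 3)
// Confirmed (one-bar-delayed) HTF %K to avoid repainting
kHtf = request.security(syminfo.tickerid, htf, k[1], lookahead = barmerge.lookahead_on)

longOK  = kHtf > 50 and ta.crossover(k, d) and k < 20
shortOK = kHtf < 50 and ta.crossunder(k, d) and k > 80

bgcolor(kHtf > 50 ? color.new(color.green, 90) : color.new(color.red, 90))
plotshape(longOK,  "Long OK",  shape.triangleup,   location.belowbar, color.green)
plotshape(shortOK, "Short OK", shape.triangledown, location.abovebar, color.red)
```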
Zone Transition Strategy:
Monitor Stochastic for transitions between zones (oversold to neutral, neutral to overbought, etc.)
Enter long when Stochastic crosses above 20 (exiting oversold), signaling momentum shift from bearish to neutral/bullish
Enter short when Stochastic crosses below 80 (exiting overbought), signaling momentum shift from bullish to neutral/bearish
Use zone midpoint (50) as dynamic support/resistance for position management
Trail stops as Stochastic advances through favorable zones
Exit when Stochastic fails to maintain momentum and reverses back into prior zone
📋 DETAILED PARAMETER CONFIGURATION
%K Length (Default: 14):
Lower Values (5-9): Highly sensitive to price changes, generates more frequent signals, increased false signals in choppy markets, suitable for very short-term trading and scalping
Standard Values (10-14): Balanced sensitivity and reliability, traditional default (14) widely used, suitable for swing trading and intraday strategies
Higher Values (15-21): Reduced sensitivity, smoother oscillations, fewer but potentially more reliable signals, better for position trading and lower timeframe noise reduction
Very High Values (21+): Slow response, long-term momentum measurement, fewer trading signals, suitable for weekly or monthly analysis
%K Smoothing (Default: 3):
Value 1: Fast Stochastic, uses raw %K calculation without additional smoothing, most responsive to price changes, generates earliest signals with higher noise
Value 3: Slow Stochastic (default), traditional smoothing level, reduces false signals while maintaining good responsiveness, widely accepted standard
Values 5-7: Very slow response, extremely smooth oscillations, significantly reduced whipsaws but delayed entry/exit timing
Recommendation: Default value 3 suits most trading scenarios, active short-term traders may use 1, conservative long-term positions use 5+
%D Smoothing (Default: 3):
Lower Values (1-2): Signal line closely follows %K, frequent crossover signals, useful for active trading but requires strict filtering
Standard Value (3): Traditional setting providing balanced signal line behavior, optimal for most trading applications
Higher Values (4-7): Smoother signal line, fewer crossover signals, reduced whipsaws but slower confirmation, better for trend trading
Very High Values (8+): Signal line becomes slow-moving reference, crossovers rare and highly significant, suitable for long-term position changes only
Smoothing Type Algorithm Selection:
For Trending Markets:
ZLEMA, DEMA, TEMA: Reduced lag for faster trend entry, quick response to momentum shifts, suitable for strong directional moves
HMA, ALMA: Good balance of smoothness and responsiveness, effective for clean trend following without excessive noise
EMA: Classic choice for trending markets, faster than SMA while maintaining reasonable stability
For Ranging/Choppy Markets:
Kalman Filter, Super Smoother: Superior noise filtering, reduces false signals in sideways action, helps identify genuine reversal points
Laguerre Filters: Smooth oscillations with adjustable lag, excellent for mean reversion strategies in ranges
T3, TMA: Very smooth output, filters out market noise effectively, clearer extreme zone identification
For Adaptive Market Conditions:
KAMA: Automatically adjusts to market efficiency, fast in trends and slow in congestion, reduces whipsaws during transitions
FRAMA: Adapts to fractal market structure, responsive during directional moves, conservative during uncertainty
McGinley Dynamic: Self-adjusting smoothing, follows price naturally, minimizes lag in trending markets while filtering noise in ranges
For Conservative Long-Term Analysis:
SMA: Traditional choice, predictable behavior, widely understood characteristics
RMA (Wilder's): Smooth oscillations, reduced sensitivity to outliers, consistent behavior across market conditions
Laguerre Binomial Filter: Extremely smooth output, ideal for weekly/monthly timeframe analysis, eliminates short-term noise completely
Source Selection:
Close (Default): Standard choice using closing prices, most common and widely tested
HLC3 or OHLC4: Incorporates more price information, reduces impact of sudden spikes or gaps, smoother oscillator behavior
HL2: Midpoint of high-low range, emphasizes intrabar volatility, useful for markets with wide intraday ranges
Custom Source: Can use other indicators as input (e.g., Heikin Ashi close, smoothed price), creates derivative momentum indicators
📈 PERFORMANCE ANALYSIS & COMPETITIVE ADVANTAGES
Responsiveness Characteristics:
Traditional SMA-Based Stochastic:
Fixed lag regardless of market conditions, consistent delay of approximately (K Smoothing + D Smoothing) / 2 periods
Equal treatment of trending and ranging markets, no adaptation to volatility changes
Predictable behavior but suboptimal in varying market regimes
Enhanced Version with Adaptive Algorithms:
KAMA and FRAMA reduce lag by up to 40-60% in strong trends compared to SMA while maintaining similar smoothness in ranges
ZLEMA and T3 provide near-zero lag characteristics for early entry signals with acceptable noise levels
Kalman Filter and Super Smoother offer superior noise rejection, reducing false signals in choppy conditions by an estimated 30-50% compared to SMA
Performance improvements vary by algorithm selection and market conditions
Signal Quality Improvements:
Adaptive algorithms help reduce whipsaw trades in ranging markets by adjusting sensitivity dynamically
Advanced filters (Kalman, Laguerre, Super Smoother) provide clearer extreme zone readings for mean reversion strategies
Zero-lag methods (ZLEMA, DEMA, TEMA) generate earlier crossover signals in trending markets for improved entry timing
Smoother algorithms (T3, Laguerre Binomial) reduce false extreme zone touches for more reliable overbought/oversold signals
Comparison with Standard Implementations:
Versus Basic Stochastic: Enhanced version offers 21 smoothing options versus single SMA, allowing optimization for specific market characteristics and trading styles
Versus RSI: Stochastic provides range-bound measurement (0-100) with clear extreme zones, RSI measures momentum speed, Stochastic offers clearer visual overbought/oversold identification
Versus MACD: Stochastic bounded oscillator suitable for mean reversion, MACD unbounded indicator better for trend strength, Stochastic excels in range-bound and oscillating markets
Versus CCI: Stochastic has fixed bounds (0-100) for consistent interpretation, CCI unbounded with variable extremes, Stochastic provides more standardized extreme readings across different instruments
Flexibility Advantages:
Single indicator adaptable to multiple strategies through algorithm selection rather than requiring different indicator variants
Ability to optimize smoothing characteristics for specific instruments (e.g., smoother for crypto volatility, faster for forex trends)
Multi-timeframe analysis with consistent algorithm across timeframes for coherent momentum picture
Backtesting capability with algorithm as optimization parameter for strategy development
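A short sketch of the multi-timeframe idea — the same oscillator and smoothing algorithm requested on a higher timeframe via request.security; the timeframe and lengths here are assumptions:

```pinescript
//@version=5
indicator("MTF Stochastic (sketch)")
htf  = input.timeframe("240", "Higher timeframe")
kLen = input.int(14, "%K Length")
k    = ta.ema(ta.stoch(close, high, low, kLen), 3)
// Request the identical calculation on the higher timeframe;
// lookahead stays off so historical bars do not repaint.
kHtf = request.security(syminfo.tickerid, htf, k, lookahead = barmerge.lookahead_off)
plot(k, "Chart TF %K", color.teal)
plot(kHtf, "HTF %K", color.orange)
```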
Limitations and Considerations:
Increased complexity from multiple algorithm choices may lead to over-optimization if parameters are curve-fitted to historical data
Adaptive algorithms (KAMA, FRAMA) have adjustment periods during market regime changes where signals may be less reliable
Zero-lag algorithms sacrifice some smoothness for responsiveness, potentially increasing noise sensitivity in very choppy conditions
Performance characteristics vary significantly across algorithms, requiring understanding and testing before live implementation
Like all oscillators, Stochastic can remain in extreme zones for extended periods during strong trends, generating premature reversal signals
USAGE NOTES
This indicator is designed for technical analysis and educational purposes to provide traders with enhanced flexibility in momentum analysis. The Stochastic Oscillator has limitations and should not be used as the sole basis for trading decisions.
Important Considerations:
Algorithm performance varies with market conditions - no single smoothing method is optimal for all scenarios
Extreme zone signals (overbought/oversold) indicate potential reversal areas but not guaranteed turning points, especially in strong trends
Crossover signals may generate false entries during sideways choppy markets regardless of smoothing algorithm
Divergence patterns require confirmation from price action or additional indicators before trading
Past indicator characteristics and backtested results do not guarantee future performance
Always combine Stochastic analysis with proper risk management, position sizing, and multi-indicator confirmation
Test selected algorithm on historical data of specific instrument and timeframe before live trading
Market regime changes may require algorithm adjustment for optimal performance
The enhanced smoothing options are intended to provide tools for optimizing the indicator's behavior to match individual trading styles and market characteristics, not to create a perfect predictive tool. Responsible usage includes understanding the mathematical properties of selected algorithms and their appropriate application contexts.
Optimized Grid with KNN_2.0
Strategy Overview
This strategy, named "Optimized Grid with KNN_2.0," is designed to optimize trading decisions using a combination of grid trading, K-Nearest Neighbors (KNN) algorithm, and a greedy algorithm. The strategy aims to maximize profits by dynamically adjusting entry and exit thresholds based on market conditions and historical data.
Key Components
Grid Trading:
The strategy uses a grid-based approach to place buy and sell orders at predefined price levels. This helps in capturing profits from market fluctuations.
K-Nearest Neighbors (KNN) Algorithm:
The KNN algorithm is used to optimize entry and exit points based on historical price data. It identifies the nearest neighbors (similar price movements) and adjusts the thresholds accordingly.
Greedy Algorithm:
The greedy algorithm is employed to dynamically adjust the stop-loss and take-profit levels. It ensures that the strategy captures maximum profits by adjusting thresholds based on recent price changes.
Detailed Explanation
Grid Trading:
The strategy defines a grid of price levels where buy and sell orders are placed. The openTh and closeTh parameters determine the thresholds for opening and closing positions.
The t3_fast and t3_slow indicators are used to generate trading signals based on the crossover and crossunder of these indicators.
KNN Algorithm:
The KNN algorithm is used to find the nearest neighbors (similar price movements) in the historical data. It calculates the distance between the current price and historical prices to identify the most similar price movements.
The algorithm then adjusts the entry and exit thresholds based on the average change in price of the nearest neighbors.
Greedy Algorithm:
The greedy algorithm dynamically adjusts the stop-loss and take-profit levels based on recent price changes. It ensures that the strategy captures maximum profits by adjusting thresholds in real-time.
The algorithm uses the average_change variable to calculate the average price change of the nearest neighbors and adjusts the thresholds accordingly.
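The KNN step described above maps onto a fairly small amount of code. Below is a hedged, single-feature sketch — find the k most similar past bars and average the move that followed each, producing an average_change-style value that a greedy rule could then use to nudge thresholds. The one-bar ROC feature and all names are assumptions; the strategy's actual features, grid levels, and threshold logic are not reproduced here:

```pinescript
//@version=5
indicator("KNN Average Change (sketch)")
k        = input.int(5, "Neighbors (k)")
lookback = input.int(100, "History window")
feature  = ta.roc(close, 1)   // hypothetical single feature

var float avgChange = 0.0
if bar_index > lookback + 1
    dists   = array.new_float()
    changes = array.new_float()
    // Distance to each of the last `lookback` bars, plus the move that followed it
    for i = 1 to lookback
        array.push(dists, math.abs(feature - feature[i]))
        array.push(changes, feature[i - 1])
    // The k-th smallest distance defines the neighborhood cutoff
    sorted = array.copy(dists)
    array.sort(sorted, order.ascending)
    cutoff = array.get(sorted, k - 1)
    float sum = 0.0
    int   n   = 0
    for j = 0 to array.size(dists) - 1
        if array.get(dists, j) <= cutoff and n < k
            sum += array.get(changes, j)
            n   += 1
    avgChange := n > 0 ? sum / n : avgChange
// A greedy-style rule could widen or tighten stop-loss / take-profit
// thresholds in proportion to this expected move.
plot(avgChange, "Avg change of k nearest neighbors")
```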
Machine Learning: Lorentzian Classification
█ OVERVIEW
A Lorentzian Distance Classifier (LDC) is a Machine Learning classification algorithm capable of categorizing historical data from a multi-dimensional feature space. This indicator demonstrates how Lorentzian Classification can also be used to predict the direction of future price movements when used as the distance metric for a novel implementation of an Approximate Nearest Neighbors (ANN) algorithm.
█ BACKGROUND
In physics, Lorentzian space is perhaps best known for its role in describing the curvature of space-time in Einstein's theory of General Relativity (2). Interestingly, however, this abstract concept from theoretical physics also has tangible real-world applications in trading.
Recently, it was hypothesized that Lorentzian space was also well-suited for analyzing time-series data (4), (5). This hypothesis has been supported by several empirical studies that demonstrate that Lorentzian distance is more robust to outliers and noise than the more commonly used Euclidean distance (1), (3), (6). Furthermore, Lorentzian distance was also shown to outperform dozens of other highly regarded distance metrics, including Manhattan distance, Bhattacharyya similarity, and Cosine similarity (1), (3). Outside of Dynamic Time Warping based approaches, which are unfortunately too computationally intensive for PineScript at this time, the Lorentzian Distance metric consistently scores the highest mean accuracy over a wide variety of time series data sets (1).
Euclidean distance is commonly used as the default distance metric for NN-based search algorithms, but it may not always be the best choice when dealing with financial market data. This is because financial market data can be significantly impacted by proximity to major world events such as FOMC Meetings and Black Swan events. This event-based distortion of market data can be framed as similar to the gravitational warping caused by a massive object on the space-time continuum. For financial markets, the analogous continuum that experiences warping can be referred to as "price-time".
Below is a side-by-side comparison of how neighborhoods of similar historical points appear in three-dimensional Euclidean Space and Lorentzian Space:
This figure demonstrates how Lorentzian space can better accommodate the warping of price-time since the Lorentzian distance function compresses the Euclidean neighborhood in such a way that the new neighborhood distribution in Lorentzian space tends to cluster around each of the major feature axes in addition to the origin itself. This means that, even though some nearest neighbors will be the same regardless of the distance metric used, Lorentzian space will also allow for the consideration of historical points that would otherwise never be considered with a Euclidean distance metric.
Intuitively, the advantage inherent in the Lorentzian distance metric makes sense. For example, it is logical that the price action that occurs in the hours after Chairman Powell finishes delivering a speech would resemble at least some of the previous times when he finished delivering a speech. This may be true regardless of other factors, such as whether or not the market was overbought or oversold at the time or if the macro conditions were more bullish or bearish overall. These historical reference points are extremely valuable for predictive models, yet the Euclidean distance metric would miss these neighbors entirely, often in favor of irrelevant data points from the day before the event. By using Lorentzian distance as a metric, the ML model is instead able to consider the warping of price-time caused by the event and, ultimately, transcend the temporal bias imposed on it by the time series.
For more information on the implementation details of the Approximate Nearest Neighbors (ANN) algorithm used in this indicator, please refer to the detailed comments in the source code.
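For readers who prefer to see the metric itself, here is a minimal two-feature sketch of a Lorentzian distance in Pine Script — the log term is what compresses event-driven outliers relative to a Euclidean sum of squares. The specific features and lengths are assumptions for illustration:

```pinescript
//@version=5
indicator("Lorentzian Distance (sketch)")
// Two example features; the indicator itself draws from RSI, WT, CCI, ADX
f1 = ta.rsi(close, 14)
f2 = ta.cci(close, 20)
n  = input.int(4, "Bars back to compare against")
// Lorentzian distance: sum over features of log(1 + |difference|).
// Large event-driven feature gaps are damped by the log, so a single
// outlier dimension cannot dominate the neighbor search.
d = math.log(1.0 + math.abs(f1 - f1[n])) + math.log(1.0 + math.abs(f2 - f2[n]))
plot(d, "Lorentzian distance")
```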
█ HOW TO USE
Below is an explanatory breakdown of the different parts of this indicator as it appears in the interface:
Below is an explanation of the different settings for this indicator:
General Settings:
Source - This has a default value of "hlc3" and is used to control the input data source.
Neighbors Count - This has a default value of 8, a minimum value of 1, a maximum value of 100, and a step of 1. It is used to control the number of neighbors to consider.
Max Bars Back - This has a default value of 2000.
Feature Count - This has a default value of 5, a minimum value of 2, and a maximum value of 5. It controls the number of features to use for ML predictions.
Color Compression - This has a default value of 1, a minimum value of 1, and a maximum value of 10. It is used to control the compression factor for adjusting the intensity of the color scale.
Show Exits - This has a default value of false. It controls whether to show the exit threshold on the chart.
Use Dynamic Exits - This has a default value of false. It is used to control whether to attempt to let profits ride by dynamically adjusting the exit threshold based on kernel regression.
Feature Engineering Settings:
Note: The Feature Engineering section is for fine-tuning the features used for ML predictions. The default values are optimized for the 4H to 12H timeframes for most charts, but they should also work reasonably well for other timeframes. By default, the model can support features that accept two parameters (Parameter A and Parameter B, respectively). Even though there are only 4 features provided by default, the same feature with different settings counts as two separate features. If the feature only accepts one parameter, then the second parameter will default to EMA-based smoothing with a default value of 1. These features represent the most effective combination I have encountered in my testing, but additional features may be added as additional options in the future.
Feature 1 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 2 - This has a default value of "WT" and options are: "RSI", "WT", "CCI", "ADX".
Feature 3 - This has a default value of "CCI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 4 - This has a default value of "ADX" and options are: "RSI", "WT", "CCI", "ADX".
Feature 5 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
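As a rough sketch of how such a two-parameter feature slot can be implemented — with Parameter B acting as EMA smoothing for one-parameter features, per the note above — consider the following. The function name and defaults are illustrative, and WT (WaveTrend) is omitted for brevity:

```pinescript
//@version=5
indicator("Feature Slot (sketch)")
// pA = primary length; pB = EMA smoothing for one-parameter features
feature(kind, pA, pB) =>
    rsiF = ta.ema(ta.rsi(close, pA), pB)
    cciF = ta.ema(ta.cci(close, pA), pB)
    [diP, diM, adxF] = ta.dmi(pA, pB)   // ADX uses both parameters directly
    switch kind
        "RSI" => rsiF
        "CCI" => cciF
        => adxF                          // "ADX"

f1 = feature("RSI", 14, 1)
f2 = feature("CCI", 20, 1)
plot(f1, "Feature 1")
plot(f2, "Feature 2")
```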
Filters Settings:
Use Volatility Filter - This has a default value of true. It is used to control whether to use the volatility filter.
Use Regime Filter - This has a default value of true. It is used to control whether to use the trend detection filter.
Use ADX Filter - This has a default value of false. It is used to control whether to use the ADX filter.
Regime Threshold - This has a default value of -0.1, a minimum value of -10, a maximum value of 10, and a step of 0.1. It is used to control the Regime Detection filter for detecting Trending/Ranging markets.
ADX Threshold - This has a default value of 20, a minimum value of 0, a maximum value of 100, and a step of 1. It is used to control the threshold for detecting Trending/Ranging markets.
Kernel Regression Settings:
Trade with Kernel - This has a default value of true. It is used to control whether to trade with the kernel.
Show Kernel Estimate - This has a default value of true. It is used to control whether to show the kernel estimate.
Lookback Window - This has a default value of 8 and a minimum value of 3. It is used to control the number of bars used for the estimation. Recommended range: 3-50
Relative Weighting - This has a default value of 8 and a step size of 0.25. It is used to control the relative weighting of time frames. Recommended range: 0.25-25
Start Regression at Bar - This has a default value of 25. It is used to control the bar index on which to start regression. Recommended range: 0-25
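The lookback/weighting/start-bar trio corresponds to the parameters of a Nadaraya-Watson estimate with a rational quadratic kernel, as popularized in open-source Pine kernel libraries. Below is a finite-window sketch of that estimate; treat the exact weight formula and constants as assumptions rather than this indicator's verbatim code:

```pinescript
//@version=5
indicator("Rational Quadratic Kernel (sketch)", overlay=true)
h = input.int(8, "Lookback Window", minval = 3)
r = input.float(8.0, "Relative Weighting", step = 0.25)
// Nadaraya-Watson estimate: a weighted mean of past prices whose weights
// decay with bar distance i under a rational quadratic kernel.
float num = 0.0
float den = 0.0
for i = 0 to 2 * h    // finite window for the sketch; libraries span more history
    w = math.pow(1.0 + (i * i) / (2.0 * r * h * h), -r)
    num += close[i] * w
    den += w
plot(num / den, "Kernel estimate", color.purple)
```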
Display Settings:
Show Bar Colors - This has a default value of true. It is used to control whether to show the bar colors.
Show Bar Prediction Values - This has a default value of true. It controls whether to show the ML model's evaluation of each bar as an integer.
Use ATR Offset - This has a default value of false. It controls whether to use the ATR offset instead of the bar prediction offset.
Bar Prediction Offset - This has a default value of 0 and a minimum value of 0. It is used to control the offset of the bar predictions as a percentage from the bar high or close.
Backtesting Settings:
Show Backtest Results - This has a default value of true. It is used to control whether to display the win rate of the given configuration.
█ WORKS CITED
(1) R. Giusti and G. E. A. P. A. Batista, "An Empirical Comparison of Dissimilarity Measures for Time Series Classification," 2013 Brazilian Conference on Intelligent Systems, Oct. 2013, DOI: 10.1109/bracis.2013.22.
(2) Y. Kerimbekov, H. Ş. Bilge, and H. H. Uğurlu, "The use of Lorentzian distance metric in classification problems," Pattern Recognition Letters, vol. 84, 170–176, Dec. 2016, DOI: 10.1016/j.patrec.2016.09.006.
(3) A. Bagnall, A. Bostrom, J. Large, and J. Lines, "The Great Time Series Classification Bake Off: An Experimental Evaluation of Recently Proposed Algorithms." ResearchGate, Feb. 04, 2016.
(4) H. Ş. Bilge, Yerzhan Kerimbekov, and Hasan Hüseyin Uğurlu, "A new classification method by using Lorentzian distance metric," ResearchGate, Sep. 02, 2015.
(5) Y. Kerimbekov and H. Şakir Bilge, "Lorentzian Distance Classifier for Multiple Features," Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, 2017, DOI: 10.5220/0006197004930501.
(6) V. Surya Prasath et al., "Effects of Distance Measure Choice on KNN Classifier Performance - A Review."
█ ACKNOWLEDGEMENTS
@veryfid - For many invaluable insights, discussions, and advice that helped to shape this project.
@capissimo - For open sourcing his interesting ideas regarding various KNN implementations in PineScript, several of which helped inspire my original undertaking of this project.
@RikkiTavi - For many invaluable physics-related conversations and for helping me develop a mechanism for visualizing various distance algorithms in 3D using JavaScript.
@jlaurel - For invaluable literature recommendations that helped me to understand the underlying subject matter of this project.
@annutara - For help in beta-testing this indicator and for sharing many helpful ideas and insights early on in its development.
@jasontaylor7 - For helping to beta-test this indicator and for many helpful conversations that helped to shape my backtesting workflow.
@meddymarkusvanhala - For helping to beta-test this indicator.
@dlbnext - For incredibly detailed backtesting of this indicator and for sharing numerous ideas on how the user experience could be improved.
Adaptive MA constructor [lastguru]
Adaptive Moving Averages are nothing new; however, most of them use EMA as their MA of choice once the preferred smoothing length is determined. I have decided to make an experiment and separate length generation from smoothing, offering multiple alternatives to be combined. Some of the combinations are widely known, some are not. This indicator is based on my previously published public libraries and also serves as a usage demonstration for them. I will try to expand the collection (suggestions are welcome), but it is not meant as an encyclopaedic resource, so you are encouraged to experiment yourself: by looking at the source code of this indicator, I am sure you will see how trivial it is to use the provided libraries and expand them with your own ideas and combinations. I give no recommendation on what settings to use, but if you find some useful settings, combinations, or application ideas (or bugs in my code), I would be happy to read about them in the comments section.
The indicator works in three stages: Prefiltering, Length Adaptation and Moving Averages.
Prefiltering is a fast smoothing to get rid of high-frequency (2, 3 or 4 bar) noise.
Adaptation algorithms are roughly subdivided in two categories: classic Length Adaptations and Cycle Estimators (they are also implemented in separate libraries), all are selected in Adaptation dropdown. Length Adaptation used in the Adaptive Moving Averages and the Adaptive Oscillators try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly with a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
Chande (Price) - based on Chande's Dynamic Momentum Index (CDMI or DYMOI), which is dynamic RSI with this length
Chande (Volume) - a variant of Chande's algorithm, where volume is used instead of price
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Deviation Scaling - based on DSSS by John F. Ehlers
Median Average - based on Median Average Adaptive Filter by John F. Ehlers
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Alpha - based on MESA Adaptive Moving Average by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers, but unlike Alpha calculation, this adaptation estimates cycle period
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (it also does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
Chande's Adaptations also have 3 additional parameters: SD Length (lookback length of Standard deviation), Smooth (smoothing length of Standard deviation) and Power (exponent of the length adaptation - lower is smaller variation). These are internal tweaks for the calculation.
The Moving Averages section offers you a choice of Moving Average algorithms. Most of the Adaptations were originally used with EMA, so this is a good starting point for exploration.
SMA - Simple Moving Average
RMA - Running Moving Average
EMA - Exponential Moving Average
HMA - Hull Moving Average
VWMA - Volume Weighted Moving Average
2-pole Super Smoother - 2-pole Super Smoother by John F. Ehlers
3-pole Super Smoother - 3-pole Super Smoother by John F. Ehlers
Filt11 - a variant of 2-pole Super Smoother with error averaging for zero-lag response by John F. Ehlers
Triangle Window - Triangle Window Filter by John F. Ehlers
Hamming Window - Hamming Window Filter by John F. Ehlers
Hann Window - Hann Window Filter by John F. Ehlers
Lowpass - removes cyclic components shorter than length (Price - Highpass)
DSSS - Deviation Scaled Super Smoother by John F. Ehlers
There are two Moving Averages drawn on the chart, so a length needs to be selected for each. If no Adaptation is selected (the None option), you can set Fast Length and Slow Length directly. If an Adaptation is selected, then a Cycle multiplier can be selected for the Fast and Slow MA.
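To show how length generation stays decoupled from smoothing, here is a compact sketch in the spirit of the Kaufman Efficiency Scaling adaptation feeding an EMA — a simplified illustration under assumed bounds, not this script's code:

```pinescript
//@version=5
indicator("Length Adaptation -> MA (sketch)", overlay=true)
erLen = input.int(10, "Efficiency lookback")
lo    = input.int(5, "Bound From")
hi    = input.int(50, "Bound To")
// Stage 1: Kaufman Efficiency Ratio = net move / total path length
net = math.abs(close - close[erLen])
float path = 0.0
for i = 0 to erLen - 1
    path += math.abs(close[i] - close[i + 1])
er = path != 0 ? net / path : 0.0
// Stage 2: map efficiency onto the [lo, hi] length bounds
len = lo + (1.0 - er) * (hi - lo)   // trending (er -> 1) => shorter, faster length
// Stage 3: feed the adaptive length into the chosen smoother (EMA here)
alpha = 2.0 / (len + 1.0)
var float ama = na
ama := na(ama) ? close : alpha * close + (1.0 - alpha) * ama
plot(ama, "Adaptive EMA", color.orange)
```

Swapping Stage 1 for another estimator, or Stage 3 for another smoother, is exactly the combinatorial freedom this constructor exposes.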
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.