The script can be recreated with this library. Unlike the original script, which used multiple arrays, this version has been reworked to use the new Pine Script matrix features.
To make a kNN prediction, the following data should be supplied to the wrapper:
kNN: filter type, currently either Binary or Percent. Binary works like in the original script: the system stores whether the price has increased (+1) or decreased (-1) since the previous knnStore event (triggered when either the long or the short condition is met). Percent works the same way, but the values stored are the price differences as percentages, so larger price differences give higher scores.
k: number k. This is how many nearest neighbors are to be selected (and summed up to get the result).
skew: kNN minimum difference. Normally, the prediction is made with a simple majority of the neighbor votes. If skew is given, more than a simple majority is needed for a prediction. This also means that for some inputs no prediction is given at all (whenever the vote sum falls between -skew and +skew). Note that in Percent mode more profitable trades have higher voting power.
depth: kNN matrix size limit. Originally, the whole available trade history was used to make a prediction. This not only requires more computational power, but also ignores the fact that market conditions change. This setting restricts the memory matrix to a finite number of past trades.
price: price series
long: long condition. True if the long conditions are met but filters are not yet applied. For example, in my original script, trades are only made on crossings of fast and slow MAs, so whenever it is possible to go long, this value is set to true; false otherwise.
short: short condition. Same as long, but for short condition.
store: whether the inputs should be stored. Additional filters may be applied to prevent bad trades (for example, trend-based filters), so if you only need to consult kNN without storing the trade, set this to false.
feature1: current value of feature 1. A feature in this case is some kind of data derived from the price. Different features may be used to analyse the price series. For example, oscillator values. Not all of them may be used for kNN prediction. As the current kNN implementation is 2-dimensional, only two features can be used.
feature2: current value of feature 2.
The wrapper returns a tuple: [longOK, shortOK]. This is a pair of filters. When longOK is true, kNN predicts that a long trade may be taken; when shortOK is true, kNN predicts that a short trade may be taken. The kNN filters are returned whenever the long or short conditions are met. A trade is supposed to happen when the long or short condition is met and the kNN filter for the desired direction is true.
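The skew-based decision rule described above can be sketched in plain Python (an illustration of the described behaviour, not the library's Pine code; `votes_sum` stands for the summed results of the k nearest neighbors):

```python
def knn_filter(votes_sum, skew):
    """Map the summed neighbor votes to the [longOK, shortOK] pair.

    With skew > 0, more than a simple majority is required: vote sums
    in the interval [-skew, +skew] produce no prediction at all.
    """
    long_ok = votes_sum > skew
    short_ok = votes_sum < -skew
    return long_ok, short_ok

print(knn_filter(3, 0))   # clear long majority  -> (True, False)
print(knn_filter(1, 2))   # inside the dead zone -> (False, False)
print(knn_filter(-4, 2))  # clear short majority -> (False, True)
```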
Exported functions:
knnStore(knn, p1, p2, src, maxrows)
Stores the previous trade and buffers the current one until its result is in. Results are binary: up (+1) / down (-1).
Parameters:
  knn: knn matrix
  p1: feature 1 value
  p2: feature 2 value
  src: current price
  maxrows: limit the matrix size to this number of rows (0 for no limit)
Returns: modified knn matrix
knnStorePercent(knn, p1, p2, src, maxrows)
Stores the previous trade and buffers the current one until its result is in. Results are stored as percentages.
Parameters:
  knn: knn matrix
  p1: feature 1 value
  p2: feature 2 value
  src: current price
  maxrows: limit the matrix size to this number of rows (0 for no limit)
Returns: modified knn matrix
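The two storage modes can be sketched in Python (illustrative only; the list of tuples `memory` stands in for the library's Pine matrix, and the function combines both modes for brevity):

```python
def knn_store(memory, f1, f2, prev_price, price, mode="binary", maxrows=0):
    """Record the outcome of the previously buffered trade.

    Each row is (feature1, feature2, result). In 'binary' mode the
    result is +1 if the price rose since the previous event, else -1;
    in 'percent' mode it is the price change in percent, so bigger
    moves carry more voting weight. maxrows = 0 means no size limit.
    """
    if mode == "binary":
        result = 1 if price > prev_price else -1
    else:
        result = (price - prev_price) / prev_price * 100
    memory.append((f1, f2, result))
    if maxrows > 0 and len(memory) > maxrows:
        del memory[0]  # drop the oldest trade (the depth limit)
    return memory

m = []
knn_store(m, 0.5, 0.2, 100.0, 110.0, mode="binary")   # price rose -> +1
knn_store(m, 0.1, 0.9, 100.0, 99.0, mode="percent")   # -1% price change
print(m)
```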
knnGet(distance, result)
Gets the neighbours by selecting the k results with the smallest distances.
Parameters:
  distance: distance array
  result: result array
Returns: array slice of k results
knnDistance(knn, p1, p2)
Creates a distance array from the two given parameters.
Parameters:
  knn: knn matrix
  p1: feature 1 value
  p2: feature 2 value
Returns: distance array
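As a sketch of what the 2-D distance array looks like (the description above does not spell out the metric; squared Euclidean distance is a common choice and is assumed here, which preserves the neighbor ordering without needing a square root):

```python
def knn_distance(memory, p1, p2):
    """Distance from the query point (p1, p2) to every stored row.

    Squared Euclidean distance: the square root is unnecessary when
    distances are only compared against each other, never reported.
    """
    return [(f1 - p1) ** 2 + (f2 - p2) ** 2 for f1, f2, _ in memory]

memory = [(0.0, 0.0, 1), (3.0, 4.0, -1)]
print(knn_distance(memory, 0.0, 0.0))  # [0.0, 25.0]
```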
knnSum(knn, p1, p2, k)
Makes a prediction by finding the k nearest neighbours and summing them up.
Parameters:
  knn: knn matrix
  p1: feature 1 value
  p2: feature 2 value
  k: sum k nearest neighbors
Returns: sum of k nearest neighbors
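Finding and summing the k nearest neighbors, again as an illustrative Python sketch (sorting the whole memory is the simplest way to express it; the Pine library works on distance and result arrays instead, and the squared-Euclidean metric is an assumption):

```python
def knn_sum(memory, p1, p2, k):
    """Sum the stored results of the k rows nearest to (p1, p2)."""
    nearest = sorted(memory, key=lambda r: (r[0] - p1) ** 2 + (r[1] - p2) ** 2)
    return sum(r[2] for r in nearest[:k])

memory = [(0.0, 0.0, 1), (1.0, 1.0, 1), (9.0, 9.0, -1)]
print(knn_sum(memory, 0.0, 0.0, 2))  # 2: the two nearest rows both voted +1
```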
doKNN(kNN, k, skew, depth, price, long, short, store, feature1, feature2)
Executes the kNN filter.
Parameters:
  kNN: filter type
  k: number k
  skew: kNN minimum difference
  depth: kNN matrix size limit
  price: price series
  long: long condition
  short: short condition
  store: store the supplied features (if false, only checks the results without storing)
  feature1: feature 1 value
  feature2: feature 2 value
Returns: [longOK, shortOK] filter output
After diving deeper into this code, and even though I have not tested it yet, I think the [CODE] knnGet [/CODE] function needs a fix: [CODE] out = array.slice(r, 0, math.min(k, array.size(r)-1)) [/CODE] should be [CODE] out = array.slice(r, 0, math.min((k - 1), array.size(r)-1)) [/CODE] .
Not that the results would differ much if the array is big enough.
It just makes sense that the index should be the number you want minus one, as you do with array.size.
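Whether this fix is needed hinges on whether Pine's array.slice treats its end index as inclusive or exclusive. In Python, where slice ends are exclusive, the two variants differ by exactly one element, which illustrates the off-by-one question (hypothetical values; the expressions mirror the two Pine snippets above):

```python
r = [10, 20, 30, 40, 50]  # hypothetical sorted result array
k = 3

# End index k with an exclusive end yields k elements:
print(r[0:min(k, len(r) - 1)])      # [10, 20, 30]

# End index k - 1 yields one element fewer:
print(r[0:min(k - 1, len(r) - 1)])  # [10, 20]
```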
ruckard
3) The third problem is that the old script and the new script don't give the same advice, even when I try to make them work the same way by adding that (0,0,0) array.
I'll try to figure out what actual difference in the code makes the new code and the old code give different advice even though they have the same matrix/array contents. After all, it might be a problem I introduced myself while trying to make both scripts work as similarly as possible.
I couldn't trust your new library without rewriting your old script using it... you know, I am one of those guys who needs to prove that everything makes sense. That's why.
As a bonus, this exercise of rewriting the old script with the new library was also useful for understanding kNN in more detail.
ruckard
2) The second problem is that you encourage calling the wrapper function [B]doKNN[/B] twice in the same bar.
At least, that's how I understood it originally.
Unfortunately, if you call doKNN twice in the same bar, its embedded [CODE]var knnM = matrix.new<float>(0, 0, 0)[/CODE] gets reset and you don't get the right advice, because the matrix is empty.
It's quite astonishing what happens in Pine Script in this scenario (calling a function with an embedded var variable twice): on the next bar, what you have saved there is not lost... it's still there.
ruckard
First of all: Thank you for the library!
I have been tinkering with it a bit and have found a few issues with it.