Consistency of the predictions #10

Open
feroi35 opened this issue Jun 28, 2023 · 0 comments

Hello,

When predicting the label of a sample whose prediction_value is 0.0, the predicted class is 1, because the inequality in groot/toolbox.py is "greater or equal":

```python
def predict(self, X):
    prediction_values = self.decision_function(X)
    if self.n_classes == 2:
        return (prediction_values >= 0).astype(int)
    else:
        return np.argmax(prediction_values, axis=1)
```
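
For example, a sample whose decision value is exactly 0.0 is assigned class 1 (a minimal sketch; the values are illustrative):

```python
import numpy as np

# Illustrative decision-function outputs; 0.0 sits exactly on the threshold
prediction_values = np.array([-0.5, 0.0, 0.3])

# Same comparison as in predict(): the tie at 0.0 is assigned class 1
print((prediction_values >= 0).astype(int))  # [0 1 1]
```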

The same holds for the adversarial accuracy of tree ensembles, since the inequality is also "greater or equal" in groot/verification/kantchelian_attack.py:

```python
def optimal_adversarial_example(self, sample, label):
    if self.binary:
        pred = (
            1 if self.check(sample, self.json_model) >= self.pred_threshold else 0
        )
```

with `self.pred_threshold = 0.0`.

However, when computing the adversarial accuracy for a single decision tree, in groot/verification/decision_tree_attack.py, the inequality is strict:

```python
def adversarial_examples(self, X, y, order, options={}):
    # Turn 'leaves' into bounding boxes and leaf prediction values
    bound_dicts, leaf_values = zip(*self.leaves)
    predictions = [value > 0 for value in leaf_values]
```
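
So a leaf value of exactly 0.0 is treated as class 0 here, while predict() returns class 1 for the same decision value (illustrative values again):

```python
# Same illustrative values; the strict comparison flips the tie case
leaf_values = [-0.5, 0.0, 0.3]

print([int(value > 0) for value in leaf_values])   # [0, 0, 1] -> tie becomes class 0
print([int(value >= 0) for value in leaf_values])  # [0, 1, 1] -> tie becomes class 1
```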

For consistency, I think this inequality should be non-strict ("greater or equal") as well.
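
Concretely, the one-line change I have in mind (a sketch only, I have not tested it against the rest of the attack code):

```python
# In groot/verification/decision_tree_attack.py, adversarial_examples():
predictions = [value >= 0 for value in leaf_values]
```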
