Hello,

When predicting the label of a sample with `prediction_value = 0.0`, the predicted class is 1, because the inequality in `groot/toolbox.py` is "greater or equal". The same holds for the adversarial accuracy of tree ensembles, since the inequality is also "greater or equal" in `groot/verification/kantchelian_attack.py`:
```python
def optimal_adversarial_example(self, sample, label):
    if self.binary:
        pred = (
            1 if self.check(sample, self.json_model) >= self.pred_threshold else 0
        )
```

with `self.pred_threshold = 0.0`.
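As a standalone sketch of the boundary case (plain variables, not GROOT internals):

```python
# Standalone sketch of the boundary case; plain variables, not GROOT code.
pred_threshold = 0.0    # default threshold, as in kantchelian_attack.py
prediction_value = 0.0  # model output exactly at the threshold

# "Greater or equal" rule used by toolbox.py and kantchelian_attack.py:
label = 1 if prediction_value >= pred_threshold else 0
print(label)  # 1 -> the boundary value is assigned class 1
```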
However, when computing the adversarial accuracy for a single decision tree, in `groot/verification/decision_tree_attack.py`, the inequality is strict:

```python
def adversarial_examples(self, X, y, order, options={}):
    # Turn 'leaves' into bounding boxes and leaf prediction values
    bound_dicts, leaf_values = zip(*self.leaves)
    predictions = [value > 0 for value in leaf_values]
```
For consistency, I think the inequality should be non-strict ("greater or equal") here as well.
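A minimal sketch of the resulting disagreement for a leaf value of exactly 0.0 (standalone code, reusing the line from the snippet above):

```python
# Standalone sketch: the strict inequality from decision_tree_attack.py
# assigns the boundary value a different class.
leaf_values = [0.0, -0.5, 0.5]
predictions = [value > 0 for value in leaf_values]
print(predictions)  # [False, False, True] -> 0.0 becomes class 0 here,
                    # but class 1 under the ">=" rule used elsewhere
```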