
Academic | NIPS 2018 Adversarial Vision Challenge results announced: CMU's Eric Xing team wins two championships



From: Medium

Author: Wieland Brendel

Compiled by: Machine Heart

Contributors: Zhang Qian, Wang Shuting

Today, the results of the NIPS 2018 Adversarial Vision Challenge were announced. The competition was divided into three tracks: defense, untargeted attack, and targeted attack. The Petuum-CMU team led by Eric Xing won two championships; the remaining championship went to the LIVIA team from Canada, and Tsinghua's TSAIL team took second place in the untargeted attack track. This article briefly describes the methods of these teams; full details will be revealed at the NIPS Competition Workshop on December 7, 9:15 to 10:30.

NIPS 2018 Adversarial Vision Challenge address: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-robust-model-track

Today, the results of the NIPS Adversarial Vision Challenge 2018 were announced, with participating teams submitting more than 3000 models and attack methods. This year's competition focused on realistic scenarios with very limited access to the models (up to 1000 queries per sample): models return only their final decision rather than gradients or confidence scores. This setup simulates the typical threat scenario for deployed machine learning systems, and is expected to encourage the development of effective decision-based attack methods and the construction of more robust models.
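To make the setting concrete, here is a minimal sketch (our illustration, not the challenge's actual code; `classifier` is a hypothetical callable returning class scores) of the decision-only, query-limited interface that attacks had to work with:

```python
import numpy as np

QUERY_BUDGET = 1000  # the challenge allows up to 1000 queries per sample

class DecisionOnlyModel:
    """Hypothetical wrapper: the attacker sees only the final label,
    never gradients or confidence scores."""

    def __init__(self, classifier):
        self.classifier = classifier  # any callable returning class scores
        self.queries = 0

    def predict_label(self, x):
        if self.queries >= QUERY_BUDGET:
            raise RuntimeError("query budget exhausted for this sample")
        self.queries += 1
        scores = self.classifier(x)    # internal; never exposed
        return int(np.argmax(scores))  # the only output attackers get
```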

The complete model-track leaderboard on the CrowdAI platform.

All winning entries performed at least an order of magnitude better than the standard baselines (e.g., vanilla models or transfer of the boundary attack), as measured by the median size of the L2 perturbation. We asked the top three teams in each track (defense, untargeted attack, targeted attack) for outlines of their approaches. The winners will present their approaches at the NIPS Competition Workshop on December 7, 9:15 to 10:30.
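For reference, a minimal sketch of the scoring quantity, assuming `originals` and `adversarials` are NumPy arrays of matching shape:

```python
import numpy as np

def median_l2(originals, adversarials):
    """Median L2 perturbation size over a batch of images.
    Smaller is better for attacks; larger is better for defenses."""
    deltas = (adversarials - originals).reshape(len(originals), -1)
    return float(np.median(np.linalg.norm(deltas, axis=1)))
```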

A common theme among the winners of the attack tracks was a low-frequency version of the boundary attack and the combination of diverse defense methods as surrogate models. In the model track, the winners used new methods for training robust models (details may not be revealed before the workshop) and a new gradient-based iterative L2 attack for adversarial training. In the coming weeks we will post more information about the results, including visualizations of adversarial examples generated against the defense models. The winning teams will be notified within a few weeks.

Defense

First place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

To learn deep networks that are robust to adversarial examples, the authors analyzed the generalization performance of robust models under adversarial attack. Based on this analysis, they propose a new formulation for learning robust models with guarantees on both generalization and robustness.

Second place: Wilson team (no reply has been received from the team yet)

Third place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal), Quebec, Canada

The authors trained a robust model using their newly proposed gradient-based iterative L2 attack (Decoupled Direction and Norm, DDN), which is fast enough to be used during training. In each training step, the authors find an adversarial example close to the decision boundary (using DDN) and minimize the cross-entropy on it. The model architecture is unchanged, and inference time is unaffected.
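A simplified PyTorch sketch of this training loop, based on our reading of the approach rather than the authors' code (`model` is a hypothetical classifier taking a 4-D image batch; the real DDN adapts the radius per sample):

```python
import torch
import torch.nn.functional as F

def ddn_attack(model, x, y, steps=10, alpha=0.05, gamma=0.05):
    """Simplified DDN-style attack: step along the gradient direction,
    then project the perturbation onto an L2 ball whose radius shrinks
    when the example is already adversarial and grows when it is not."""
    delta = torch.zeros_like(x, requires_grad=True)
    eps = 1.0  # current L2 radius (shared over the batch for simplicity)
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # direction: normalized gradient step
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta += alpha * grad / g_norm.view(-1, 1, 1, 1)
            # norm: shrink the ball if adversarial, expand otherwise
            eps *= (1 - gamma) if bool((logits.argmax(1) != y).all()) else (1 + gamma)
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta *= torch.clamp(eps / d_norm, max=1.0).view(-1, 1, 1, 1)
    return (x + delta).detach()

def robust_training_step(model, optimizer, x, y):
    """One adversarial-training step: craft a near-boundary example,
    then minimize the cross-entropy on it."""
    x_adv = ddn_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the attack only needs a handful of gradient steps per batch, the overhead stays close to that of standard adversarial training.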

Untargeted Attack

First place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal), Quebec, Canada

The attack is based on several surrogate models (including a robust model trained with DDN, the new attack method proposed by the authors). For each model, the authors choose two attack directions: the direction of the cross-entropy gradient with respect to the original class, and the direction given by running the DDN attack. For each direction, they perform a binary search on the perturbation norm to find the decision boundary, as sketched below. They take the best resulting attack and refine it with the boundary attack from "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models."
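The binary-search step can be sketched as follows (our illustration; `predict_label` and `direction` are hypothetical, and we assume the far end of the direction is already misclassified):

```python
import numpy as np

def binary_search_boundary(predict_label, x, direction, true_label,
                           high=10.0, steps=20):
    """Binary search on the perturbation norm along a fixed direction to
    locate the decision boundary. `predict_label` returns only the model's
    final class, matching the challenge's decision-based setting."""
    direction = direction / np.linalg.norm(direction)
    low = 0.0  # assumes x + high * direction is already misclassified
    for _ in range(steps):
        mid = 0.5 * (low + high)
        if predict_label(x + mid * direction) != true_label:
            high = mid  # adversarial: try a smaller norm
        else:
            low = mid   # still correctly classified: increase the norm
    return x + high * direction  # smallest adversarial point found
```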

Second place: TSAIL team (code name "csy530216" on the leaderboard)

Authors: Shuyu Cheng & Yinpeng Dong

The authors use a heuristic search algorithm to refine adversarial examples, similar in spirit to the boundary attack. The starting point is found by transferring a BIM attack against the Adversarial Logit Pairing baseline. In each iteration, a random perturbation is sampled from a Gaussian distribution whose diagonal covariance matrix is updated using previously successful trials to model promising search directions. The authors restrict the perturbation to the central 40×40×3 region of the 64×64×3 image: they first generate 10×10×3 noise and then upscale it to 40×40×3 by bilinear interpolation. Restricting the search space makes the algorithm more efficient.
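A rough sketch of this restricted sampling (our illustration; the scalar `std` stands in for the adapted 10×10×3 diagonal covariance):

```python
import numpy as np
from scipy.ndimage import zoom

def sample_restricted_noise(std=1.0):
    """Sketch of the restricted search space: draw 10x10x3 Gaussian noise,
    upscale it to 40x40x3 (order=1 is bilinear), and paste it into the
    centre of an otherwise-zero 64x64x3 perturbation."""
    small = std * np.random.randn(10, 10, 3)
    patch = zoom(small, (4, 4, 1), order=1)  # 10x10 -> 40x40 per channel
    full = np.zeros((64, 64, 3))
    full[12:52, 12:52, :] = patch            # central 40x40 region
    return full
```

Sampling in the coarse 10×10×3 space reduces the effective dimensionality of the search by more than an order of magnitude, which matters under a 1000-query budget.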

Third place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors used Foolbox to combine an ensemble of robust models with a variety of adversarial attack methods under multiple distance metrics to generate perturbations. They then selected the attack that minimized the maximum perturbation distance needed across the robust models under the different distance metrics (see the sketch below).
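Read as a min-max selection rule, the idea might look like this (our interpretation; `attacks`, `models`, and `distance` are hypothetical callables):

```python
def pick_best_attack(x, attacks, models, distance):
    """Sketch of the selection rule: for each candidate attack, record the
    worst-case (largest) perturbation distance it needs over the ensemble
    of robust models, then keep the attack with the smallest worst case."""
    def worst_case(attack):
        return max(distance(x, attack(model, x)) for model in models)
    return min(attacks, key=worst_case)
```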

Targeted Attack

First place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors used Foolbox to ensemble different robust models and different adversarial attack methods to generate adversarial perturbations. They found that this ensemble approach made targeted attacks more effective across different robust models.

Second place: Fortis team (code name "ttbrunner" on the leaderboard)

Authors: Thomas Brunner & Frederik Diehl, fortiss GmbH, Germany

This attack resembles the boundary attack, but its perturbations are not sampled from a random normal distribution. Instead, the authors use low-frequency patterns, which transfer well and are not easily filtered out by the defender. They also use the projected gradient of a surrogate model as a prior for sampling. In this way, they combine the advantages of both (PGD and the boundary attack) into a flexible and sample-efficient attack.
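One way to sketch such a low-frequency proposal with a surrogate-gradient prior (our illustration; `surrogate_grad` is a hypothetical precomputed gradient of the same shape, and the blend weight `mix` is an assumed parameter):

```python
import numpy as np
from scipy.ndimage import zoom

def low_frequency_proposal(shape, surrogate_grad, freq=8, mix=0.5):
    """Sketch: sample coarse Gaussian noise and upscale it so only low
    spatial frequencies survive, then blend it with the (projected)
    gradient of a surrogate model used as a sampling prior."""
    h, w, c = shape
    coarse = np.random.randn(freq, freq, c)
    noise = zoom(coarse, (h / freq, w / freq, 1), order=1)  # low-pass noise
    noise /= np.linalg.norm(noise)
    prior = surrogate_grad / np.linalg.norm(surrogate_grad)
    step = mix * prior + (1.0 - mix) * noise  # blend prior and exploration
    return step / np.linalg.norm(step)
```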

Third place: LIVIA team (code name "Jerome R" on the top list)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal), Quebec, Canada
