akashtadwai

avalanche-discussion

Jul 23rd, 2021
Continuing [this](https://github.com/ContinualAI/avalanche/discussions/603) discussion, we were experimenting with the impact of task orderings on imbalanced datasets and tried to implement the same setup using Avalanche. For a quick recap:
We are running a class-incremental CL scenario on the IDS17 dataset. The dataset has 15 classes, which we grouped under 2 labels: `0` for the normal class and `1` for all attack classes. We divided the dataset into five tasks, with three of the original classes in each task.
Example of [task ordering](https://github.com/ReethuVinta/AnomalyDetection/blob/master/IDS17/simpleMLP.py#L208):
```python
{'1': [0, 1, 2], '2': [3, 4, 5], '3': [6, 7, 8], '4': [9, 10, 11], '5': [12, 13, 14]}
```
Here `0`, `1`, `2`, etc. are the original class indices. We then used Avalanche's [tensor_scenario](https://github.com/ReethuVinta/AnomalyDetection/blob/master/IDS17/simpleMLP.py#L252) to declare the Avalanche dataset.
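To make that step concrete, here is a minimal sketch of turning per-task tensors into an Avalanche benchmark. It uses the `tensors_benchmark` generator (the newer name of `tensor_scenario`, as far as we know); the feature count, tensor shapes, and dummy data are placeholders rather than our actual IDS17 preprocessing:

```python
import torch
from avalanche.benchmarks.generators import tensors_benchmark

n_features = 78   # placeholder feature width, not the real IDS17 value
n_tasks = 5

# Dummy (x, y) tensors per task; in our code these come from the per-task splits above.
train_tensors = [(torch.randn(1000, n_features), torch.randint(0, 2, (1000,)))
                 for _ in range(n_tasks)]
test_tensors = [(torch.randn(200, n_features), torch.randint(0, 2, (200,)))
                for _ in range(n_tasks)]

benchmark = tensors_benchmark(
    train_tensors=train_tensors,
    test_tensors=test_tensors,
    task_labels=[0] * n_tasks,   # same task label everywhere (class-incremental style)
    complete_test_set_only=False,
)
```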
We trained it using the [SimpleMLP](https://github.com/ReethuVinta/AnomalyDetection/blob/master/IDS17/simpleMLP.py#L202) architecture with the [GEM](https://github.com/ReethuVinta/AnomalyDetection/blob/master/IDS17/simpleMLP.py#L290) plugin in Avalanche (a rough sketch of this training setup follows the confusion matrix below). When we trained with this setup a few months ago, the results we obtained were as expected and reproducible. However, when we recently reran it with the same parameters, the model did **not recognize** any of the samples as the `positive` label. We got the following confusion matrix regardless of how many times we ran it; changing the number of epochs and other parameters did not change it:
```python
[[750123,      0],
 [184023,      0]]
```
Here both the true positives and the false positives come out as *0*; the model never predicts the positive class.
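For reference, this is roughly what the training setup looks like (a sketch only: the optimizer, learning rate, memory size, and epoch counts are illustrative placeholders, and we use the `GEM` strategy class here; the `GEMPlugin` variant attached to a base strategy should behave the same way):

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from avalanche.models import SimpleMLP
from avalanche.training.strategies import GEM

n_features = 78   # placeholder, same as in the benchmark sketch above

# Two output classes: normal (0) vs. attack (1).
model = SimpleMLP(num_classes=2, input_size=n_features)

strategy = GEM(
    model,
    SGD(model.parameters(), lr=0.01),
    CrossEntropyLoss(),
    patterns_per_exp=256,    # GEM episodic memory per experience (placeholder)
    memory_strength=0.5,
    train_mb_size=128,
    train_epochs=5,
    eval_mb_size=256,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

# `benchmark` is the object built in the earlier sketch: train on each
# experience in order, then evaluate on the whole test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```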
Surprised by this, we then tried redefining the tasks so that all the classes are placed in a single task (a plain ML setting where all the data is seen at once), i.e., task order = {[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]}. Ideally, we would expect a good confusion matrix here, since this reduces to a standard machine-learning task (all the data is sent to the model at once). Nevertheless, we see the same thing happening: the model is **not recognizing** any of the samples as the `positive` label (a sketch of this single-task variant is below). Any help in this regard would be much appreciated.
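The single-task variant only changes how the benchmark is built; continuing the earlier sketch (again with placeholder tensors), it amounts to passing one combined experience:

```python
# All data in a single experience: concatenate the per-task tensors from the
# earlier sketch and build a one-experience benchmark.
all_train_x = torch.cat([x for x, _ in train_tensors])
all_train_y = torch.cat([y for _, y in train_tensors])
all_test_x = torch.cat([x for x, _ in test_tensors])
all_test_y = torch.cat([y for _, y in test_tensors])

single_task_benchmark = tensors_benchmark(
    train_tensors=[(all_train_x, all_train_y)],
    test_tensors=[(all_test_x, all_test_y)],
    task_labels=[0],
)
```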
The complete code is on [GitHub](https://github.com/ReethuVinta/AnomalyDetection/tree/master/IDS17). Please let us know if you need any clarification on the implementation.