id (string, 9-11 chars) | note_id (string, 9-11 chars) | forum (string, 9-11 chars) | title (string, 17-133 chars) | authors (sequence, 1-8 items) | venue (10 classes) | year (1 class) | abstract (string, 385-3.15k chars) | keywords (sequence, 1-10 items) | pdf_url (string, 39-41 chars) | bibtex (string, 178-414 chars, nullable) | date (string, 13 chars) | reviews_detailed (string, 2-29.6k chars) | num_reviews (5 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
uqm1Dy8UN-I | uqm1Dy8UN-I | uqm1Dy8UN-I | Learning to Control PDEs with Differentiable Physics | [
"Anonymous"
] | ICLR.cc 2020 Workshop | 2020 | Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations. | [
"PDEs",
"optimal control",
"shooting methods",
"optimization",
"fluid simulation"
] | https://openreview.net/pdf?id=uqm1Dy8UN-I | @misc{
anonymous2019learning,
title={Learning to Control {PDE}s with Differentiable Physics},
author={Anonymous},
year={2019},
url={https://openreview.net/forum?id=uqm1Dy8UN-I}
} | 1587924719008 | [] | 0 |
u9LrVLDu5i | u9LrVLDu5i | u9LrVLDu5i | Stochastic gradient algorithms from ODE splitting perspective | [
"Daniil Merkulov",
"Ivan Oseledets"
] | ICLR.cc 2020 Workshop | 2020 | We present a different view on stochastic optimization, which goes back to splitting schemes for approximate solutions of ODEs. In this work, we provide a connection between the stochastic gradient descent approach and a first-order splitting scheme for ODEs. We consider a special case of splitting, inspired by machine learning applications, and derive a new upper bound on the global splitting error for it. We show that the Kaczmarz method is the limiting case of the splitting scheme for unit-batch SGD on the linear least-squares problem. We support our findings with systematic empirical studies, which demonstrate that a more accurate solution of the local problems leads to step-size robustness and better convergence, in both time and iterations, on the softmax regression problem. | [
"SGD",
"Splitting",
"ODE"
] | https://openreview.net/pdf?id=u9LrVLDu5i | @inproceedings{
merkulov2019stochastic,
title={Stochastic gradient algorithms from {ODE} splitting perspective},
author={Daniil Merkulov and Ivan Oseledets},
booktitle={ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations},
year={2019},
url={https://openreview.net/forum?id=u9LrVLDu5i}
} | 1582750165107 | [] | 0 |
mTmgaxwynS | mTmgaxwynS | mTmgaxwynS | Constrained Neural Ordinary Differential Equations with Stability Guarantees | [
"Aaron Tuor",
"Jan Drgona",
"Draguna Vrabie"
] | ICLR.cc 2020 Workshop | 2020 | Differential equations are frequently used in engineering domains, such as modeling and control of industrial systems, where safety and performance guarantees are of paramount importance. Traditional physics-based modeling approaches require domain expertise and are often difficult to tune or adapt to new systems. In this paper, we show how to model discrete ordinary differential equations (ODE) with algebraic nonlinearities as deep neural networks with varying degrees of prior knowledge. We derive the stability guarantees of the network layers based on the implicit constraints imposed on the weight's eigenvalues. Moreover, we show how to use barrier methods to generically handle additional inequality constraints. We demonstrate the prediction accuracy of learned neural ODEs evaluated on open-loop simulations compared to ground truth dynamics with bi-linear terms. | [
"Deep Learning",
"Ordinary Differential Equations",
"Physics Informed Machine Learning",
"Physics Informed Neural Networks",
"Eigenvalue Constraints"
] | https://openreview.net/pdf?id=mTmgaxwynS | @inproceedings{
tuor2019constrained,
title={Constrained Neural Ordinary Differential Equations with Stability Guarantees},
author={Aaron Tuor and Jan Drgona and Draguna Vrabie},
booktitle={ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations},
year={2019},
url={https://openreview.net/forum?id=mTmgaxwynS}
} | 1582750164638 | [] | 0 |
_uPd3skTsj | _uPd3skTsj | _uPd3skTsj | Differential Equations as a Model Prior for Deep Learning and its Applications in Robotics | [
"Michael Lutter",
"Jan Peters"
] | ICLR.cc 2020 Workshop | 2020 | For many decades, much of the scientific knowledge of physics and engineering has been expressed via differential equations. These differential equations describe the underlying phenomena and the relations between different interpretable quantities. Therefore, differential equations are a promising approach to incorporate prior knowledge in machine learning models to obtain robust and interpretable models. In this paper, we summarize a straightforward approach to incorporate deep networks in differential equations to solve first-order non-linear differential equations by minimising the residual end-to-end. We describe the deep differential network that computes the functional value and smooth Jacobians in closed form. Afterwards, we demonstrate that the deep network Jacobians approximate the symbolic Jacobian and apply the proposed approach to two robotics applications. These applications use differential equations as a model prior for deep networks to learn physically plausible models and optimal feedback control. | [
"Deep Learning",
"Differential Equations",
"Physics Prior",
"Robotics"
] | https://openreview.net/pdf?id=_uPd3skTsj | @inproceedings{
lutter2019differential,
title={Differential Equations as a Model Prior for Deep Learning and its Applications in Robotics},
author={Michael Lutter and Jan Peters},
booktitle={ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations},
year={2019},
url={https://openreview.net/forum?id=_uPd3skTsj}
} | 1582750149652 | [] | 0 |
YDNzrQRsu | YDNzrQRsu | YDNzrQRsu | Differentiable Molecular Simulations for Control and Learning | [
"Wujie Wang",
"Simon Axelrod",
"Rafael Gómez-Bombarelli"
] | ICLR.cc 2020 Workshop | 2020 | Molecular simulations use statistical mechanics at the atomistic scale to enable both the elucidation of fundamental mechanisms and the engineering of matter for desired tasks. Non-quantized molecular behavior is typically simulated with differential equations parameterized by a Hamiltonian, or energy function. The Hamiltonian describes the state of the system and its interactions with the environment. In order to derive predictive microscopic models, one wishes to infer a molecular Hamiltonian from macroscopic quantities. From the perspective of engineering, one wishes to control the Hamiltonian to achieve desired macroscopic quantities. In both cases, the goal is to modify the Hamiltonian such that bulk properties of the simulated system match a given target. We demonstrate how this can be achieved using differentiable simulations where bulk target observables and simulation outcomes can be analytically differentiated with respect to Hamiltonians. Our work opens up new routes for parameterizing Hamiltonians to infer macroscopic models and develops control protocols. | [
"Molecular Dynamics",
"Quantum Dynamics",
"Differentiable Simulations",
"Statistical Physics",
"Machine Learning"
] | https://openreview.net/pdf?id=YDNzrQRsu | @inproceedings{
wang2019differentiable,
title={Differentiable Molecular Simulations for Control and Learning},
author={Wujie Wang and Simon Axelrod and Rafael G{\'o}mez-Bombarelli},
booktitle={ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations},
year={2019},
url={https://openreview.net/forum?id=YDNzrQRsu}
} | 1582750162764 | [] | 0 |
ObkQpUsR-x | ObkQpUsR-x | ObkQpUsR-x | A Free-Energy Principle for Representation Learning | [
"Yansong Gao",
"Pratik Chaudhari"
] | ICLR.cc 2020 Workshop | 2020 | We employ a formal connection of machine learning with thermodynamics to characterize the quality of learnt representations for transfer learning. We discuss how information-theoretic functionals such as rate, distortion and classification loss of a model lie on a convex, so-called equilibrium surface. We prescribe dynamical processes to traverse this surface under constraints, e.g., an iso-classification process that trades off rate and distortion to keep the classification loss unchanged. We demonstrate how this process can be used for transferring representations from a source dataset to a target dataset while keeping the classification loss constant. Experimental validation of the theoretical results is provided on standard image-classification datasets.
| [
"information theory",
"thermodynamics",
"rate-distortion theory",
"transfer learning",
"information bottleneck"
] | https://openreview.net/pdf?id=ObkQpUsR-x | @inproceedings{
gao2019a,
title={A Free-Energy Principle for Representation Learning},
author={Yansong Gao and Pratik Chaudhari},
booktitle={ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations},
year={2019},
url={https://openreview.net/forum?id=ObkQpUsR-x}
} | 1582750159476 | [] | 0 |
Jxv0mWsPc | Jxv0mWsPc | Jxv0mWsPc | Fast Convergence for Langevin with Matrix Manifold Structure | [
"Ankur Moitra",
"Andrej Risteski"
] | ICLR.cc 2020 Workshop | 2020 |
In this paper, we study the problem of sampling from distributions of the form p(x) \propto e^{-\beta f(x)} for some function f whose values and gradients we can query. This mode of access to f is natural in the scenarios in which such problems arise, for instance sampling from posteriors in parametric Bayesian models. Classical results show that a natural random walk, Langevin diffusion, mixes rapidly when f is convex. Unfortunately, even in simple examples, the applications listed above will entail working with functions f that are nonconvex -- for which sampling from p may in general require an exponential number of queries.
In this paper, we study one aspect of nonconvexity relevant for modern machine learning applications: existence of invariances (symmetries) in the function f, as a result of which the distribution p will have manifolds of points with equal probability. We give a recipe for proving mixing time bounds of Langevin dynamics in order to sample from manifolds of local optima of the function f in settings where the distribution is well-concentrated around them. We specialize our arguments to classic matrix factorization-like Bayesian inference problems where we get noisy measurements A(XX^T), X \in R^{d \times k} of a low-rank matrix, i.e. f(X) = \|A(XX^T) - b\|^2_2, X \in R^{d \times k}, and \beta the inverse of the variance of the noise. Such functions f are invariant under orthogonal transformations, and include problems like matrix factorization, sensing, completion. Beyond sampling, Langevin dynamics is a popular toy model for studying stochastic gradient descent. Along these lines, we believe that our work is an important first step towards understanding how SGD behaves when there is a high degree of symmetry in the space of parameters that produce the same output. | [
"Langevin",
"diffusion",
"Ricci Curvature",
"Poincare inequality"
] | https://openreview.net/pdf?id=Jxv0mWsPc | @inproceedings{
moitra2019fast,
title={Fast Convergence for Langevin with Matrix Manifold Structure},
author={Ankur Moitra and Andrej Risteski},
booktitle={ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations},
year={2019},
url={https://openreview.net/forum?id=Jxv0mWsPc}
} | 1582750157140 | [] | 0 |
oeM-Rb7nqoh | oeM-Rb7nqoh | oeM-Rb7nqoh | Teaching Computational Machine Learning (without Statistics) | [
"Anonymous"
] | ECMLPKDD.org 2020 Workshop | 2020 | This paper presents an undergraduate machine learning course that emphasizes algorithmic understanding and programming skills while assuming no statistical training. Emphasizing the development of good habits of mind, this course trains students to be independent machine learning practitioners through an iterative, cyclical framework for teaching concepts while adding increasing depth and nuance. Beginning with unsupervised learning, this course is sequenced as a series of machine learning ideas and concepts with specific algorithms acting as concrete examples. This paper also details course organization including evaluation practices and logistics. | [
"machine learning education",
"undergraduate education",
"computational machine learning"
] | https://openreview.net/pdf?id=oeM-Rb7nqoh | @inproceedings{
anonymous2020teaching,
title={Teaching Computational Machine Learning (without Statistics)},
author={Anonymous},
booktitle={ECML PKDD 2020 Workshop Teaching ML},
year={2020},
url={https://openreview.net/forum?id=oeM-Rb7nqoh}
} | 1594219870829 | [{"text": "Teaching students practical machine learning \n\nThis paper describes the structure and content of an instructional machine learning course. The focus on the practical aspects of machine learning for this course is an important idea. From personal experience with teaching Machine Learning, I can confirm that the knowledge about the mathematical background and the structure of the algorithms alone is not sufficient to apply these methods to specific problems as there are further technical challenges to solve. \n\n**Positive**\n\n- The focus on practical aspects of applying machine learning\n- The repeated contact with concepts, in different stages of the learning process (cyclic framework)\n- The idea that communicating technical concepts is an essential skill\n- The usage of version control and testing is part of the course\n- Bonus points for incorporating the ethical implications of ML algorithms. \n\n**Open Questions**\n- Is the focus on implementing the algorithms or applying them? For example: Implementing a k-means clustering correctly might be a different challenge than applying it to a particular problem. I am unsure whats the focus of the course. \n- Some information about the content of the \"model evaluation\" part of the course would be great. E.g. used metrics, testing procedures etc.\n- This paper gives a general overview of the topic (due to length regulations), some more details about the topics and tasks would help. It would be great if the course material would be publicly available. \n\n**Minor Questions**\n\n- I am unsure why Table 1. lists \"Linear Regression\" as \"Train/Test Paradigm\". Would expect something like \"n-fold cross-validation\".\n", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "The paper highlights a course on computational ML offered in a CS program. The course is intended as an accessible entry point to ML for undergraduate students, and assumes programming background and some mathematical maturity (either LA or multi-variable calculus are required). \n\nPros: \n\nIn addition to introducing ML and its links with CS, the course also poses the interesting question on what habits of mind are needed for ML practitioners. This is, in my experience, a necessary skill for ML practitioners where most practical problems come with unique caveats. The learning outcomes for the course likewise cover a fairly broad spectrum of topics, ranging from implementing to understanding different ML algorithms. The course also addresses ethical concerns and collaborative development, which is exciting to see at such an early stage. Use of continuous integration and unit testing is also a welcome addition to the course. The cyclic framework should be an excellent tool for reinforcing learning outcomes, I really liked the concept!\n\nSome open questions include:\n1. The authors decided to start the course with unsupervised learning algorithms rather than supervised learning. While I do not have an issue with this reordering naturally, I was wondering if the authors uncovered any evidence that one way or the other was more effective. This would be very useful for the broader community.\n2. While the course covers continuous integration, is this limited to the code the students write or does it extend also to the models they train? 
Do students deal with concepts such as how to serialise, share and/or update their models in an online setting?\n3. I would be very interested in reading some of the feedback the authors received (so tying back to point 1). Also, the author's experience of using Github Classroom and Travis CI in this setting would be very interesting. If lack of space is an issue, perhaps the authors could include this information in their presentation?", "rating": "7: Good paper, accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "The paper describes a course for computer science students. The course teaches concepts of machine learning with a focus on implementation of methods. On the side it teaches important skill like version control and clitical thinking.\n\n## Pros:\nThe paper nicely describes this introductory machine learning course. \nThe focus on implementation is unusual, yet interesting. \nThe cyclic framework described in the paper, as well as the active classroom, the flexibility system in assignments and the use of computational notebooks and continuous integration, are exciting to see.\n\n## Cons:\nI would have hoped for a link to the teaching material or at least an example showing one of the notebooks.\nAlso I remain with one question: Are the teaching materials openly available? If not, why?\nIf space in the paper was the issue, I would rather neglect the description of the two courses (stats and CS) and the college/students . \n\nAlso, it would have been nice to read, what the students think about the coures (e.g. course evaluation results).\n\n\n### Minor comments:\n- Learning objectives: the first point seems like two points to me\n- Learning objectives: assess efficacy is mentioned in both points 2 and 3 and thus could be deleted in point 2\n- Evaluation: \"mix a severeal machine learning ideas\" -> \"mix severeal machine learning ideas\"\n- Evaluation: what do you mean by \"slack site\"?\n", "rating": "7: Good paper, accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}] | 3 |
jm5E97TTMEb | jm5E97TTMEb | jm5E97TTMEb | Turning Software Engineers into Machine Learning Engineers | [
"Anonymous"
] | ECMLPKDD.org 2020 Workshop | 2020 | A first challenge in teaching machine learning to software engineering and computer science students consists of changing the methodology from a constructive design-first perspective to an empirical one, focusing on proper experimental work. On the other hand, students nowadays can make significant progress using existing scripts and powerful (deep) learning frameworks -- focusing on established use cases such as vision tasks. To tackle problems in novel application domains, a clean methodological style is indispensable. Additionally, for deep learning, familiarity with gradient dynamics is crucial to understand deeper models. Consequently, we present three exercises that build upon each other to achieve these goals. These exercises are validated experimentally in a master's level course for software engineers. | [
"teaching",
"backpropagation",
"methodology"
] | https://openreview.net/pdf?id=jm5E97TTMEb | @inproceedings{
anonymous2020turning,
title={Turning Software Engineers into Machine Learning Engineers},
author={Anonymous},
booktitle={ECML PKDD 2020 Workshop Teaching ML},
year={2020},
url={https://openreview.net/forum?id=jm5E97TTMEb}
} | 1594219872623 | [{"text": "Summary:\nThe paper first talks about the current state of learning machine learning and the challenges software engineers face when learning machine learning. The authors suggest hyperparameter tuning, data splitting, and gradient signals are the main topics to learn.\n\nStrong Points:\n1. Authors designed specific exercises to help students learn hyperparameter tuning, data splitting, and gradient signals.\n2. Specific exercises can be used not just for software engineers, but for anyone with an engineering or quantitative background.\n3. Explanations of the exercises and what they provide to students are clear.\n\nWeak Points:\n1. The entire paper is based on the observation \"We noted that especially computer science and software engineering students tended to struggle with the adoption of an empirical mindset rather than a constructive one.\" While it sounds reasonable, there are no additional examples or details explaining how software engineers struggle or why.\n2. It's not clear how the authors chose hyperparameter tuning, data splitting, and gradient signals as the main topics. Why are these more important than learning about model architectures and their applications?\n3. There's no quantitative difference provided showing the effectiveness of the results provided. I would have liked to see pre-test and post-test scores of students based on this teaching method. It would have been even nicer if there was a way to compare it to existing teaching methods.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "The authors of this submission identified three major attention points of ML teaching when introducing the topic to software engineering students. These are hyperparameter tuning,\nproper data splitting and knowledge of gradient computations and workings of backpropagation.\n\nThose principles are taught with the help of self created jupyter notebooks, providing an interactive widget for a simple linear regression problem for visualizing the effect of different learning rates, the proper use of data splitting for hyperparameter tuning and an example implementation of automatic differentiation.\n\nWhat I really liked about the paper is that the authors reported first hand experiences of their students using this approach, including what went wrong in many cases. Unsurprisingly, this seemed to have been the part about proper data splitting. This identifies areas which probably need to be addressed better in future design of teaching materials.\nAlso, the authors invested great efforts in the very readable design of small toy classes showing how backpropagation via autodiff can be implemented. I assume this will be of great value for understanding the working principles of large frameworks like pytorch or tensorflow, especially for software engineering students.\n\nOn the other hand, I think that such a code-heavy approach might not be directly transferable to different audiences which might be less familiar with programming or software design.\n\nMoreover, even though I agree that using jupyter notebooks has become a frequently used and pretty much standard way for teaching and demonstration within the data science community, I doubt that the authors used it as their only means of teaching in ML classes and they could have elaborated a bit on how they encapsulate their notebooks into their regular teaching. 
But maybe that had to be left out due to paper space restrictions.\n\n", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "### Summary: \nThis work focuses on teaching Machine Learning (ML) to Software Engineers (SE). There is a need for change of thought process between SE and ML due to constructive and empirical mindset. Authors specifically focused on three aspects of ML, namely, 1. Hyperparam tuning, 2. Data handling and 3. Backprop. To address these challenges, authors created materials which explains the underlying concepts and how they differ from SE. Authors report various findings on the effectiveness of the materials and teaching methodology on the group of students.\n\n### Strong Points:\n* Materials are self-explanatory and useful for all students over and beyond SE students\n* Motivation on each material is well thought out and described very succinctly\n\n### Weak Points:\n* I found the description on the 3rd material is a bit lacking on the self-explanation part compared to the other two materials, although the code is written well with sufficient comments in all\n* A little more detail on the \u201cnormal equation\u201d might be helpful for comparison to students who are not aware of it\n\n### Other comments:\n* Small suggestion: Resolution of Fig. 2 is very low. Generating the figures in either \u201cpdf\u201d or \u201ceps\u201d format might retain the figure quality", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}] | 3 |
dhmIJ9SwAuG | dhmIJ9SwAuG | dhmIJ9SwAuG | Introductory Machine Learning for non STEM students | [
"Anonymous"
] | ECMLPKDD.org 2020 Workshop | 2020 | Data Science in general, and Machine Learning in particular, is a powerful tool for decision-makers across non-STEM fields like Human Resources Management, Law or Marketing. Introductory Machine Learning, for non-majors that lack a strong background in Statistics and Computer Science, is a challenge for both teachers and students. The use of similes and games is a soft way to deal with definitions of concepts and procedures that are essential for further advanced courses on these subjects. | [
"Machine Learning",
"Law",
"Business"
] | https://openreview.net/pdf?id=dhmIJ9SwAuG | @inproceedings{
anonymous2020introductory,
title={Introductory Machine Learning for non {STEM} students},
author={Anonymous},
booktitle={ECML PKDD 2020 Workshop Teaching ML},
year={2020},
url={https://openreview.net/forum?id=dhmIJ9SwAuG}
} | 1594219867776 | [{"text": "This paper describes the curriculum of an introductory ML course given to non-STEM students at a Spanish higher education school. This course is embedded in a double degree program where graduate courses are offered that mix law, business and engineering. The material conveyed is then presented in the paper and key points for the delivery are highlighted, e.g. what model to introduce first, which metrics to concentrate on, how to angle the motivation for machine learning and how to control expectations. The later lessons of the course then introduce more advanced model architectures like MLPs and SVMs. The final episode of the curriculum are kaggle like projects for the students to work on.\n\nIf I understood correctly, the course is subsequent to a basic introduction to R and python. For sure, this is already a great effort for the students to pick up. What I like about this curriculum, is the focus on a simple linear regression approach to introduce students. Tying the core concepts of machine learning to this, appears to me like a splendid idea as it removes the complex math of say SVMs, MLPs et al from the teaching. This way, learners can focus on understanding supervised learning. Further, I like the idea of discussing this regression problem on a data set the learners can relate to. In this case, it is to predict the weight given the height and gender. This direct relation of the prediction to real world observations appears to be a strong bridge and the basis for content uptake. On a second thought, any discriminatory aspects of the trained model could directly be discussed based on this fiducial analysis. This shows, what a versatile vehicle this data set and prediction task can be. These two aspects are the 2 main outstanding strengths of the article.\n\nThere are a couple of things, I'd like to stress which I hope can help the author(s) to improve the paper content (potentially for the presentation at the workshop on Sep 14):\n\n- page 1, e.g. line 30 (right column) \"First contact of these students with Business Analytics is a 30 hours introductory course during the third semester.\": It might be a good idea to set up learner profiles to illustrate the background knowledge of the participating students and (even more importantly) the goals of the students. I know that in an academic context, knowing where students want to work after their graduation is very hard; however, the mental model of the teacher of where she/he sees the learners after the course are an important aspect to communicate\n\n- page 1, e.g. line 40 (right column) \"The syllabus of the course starts with fundamentals of programming and statistics with the R language\": the text misses out to define clear learning goals; while the argumentation for the curriculum design is convincing, it remains hard to judge if the content can meet the learning goals as the former are not provided anywhere\n\n- page 1, e.g. line 52 \"Lessons 4 to 7 ...\": the text references specific lessons by number, but misses an overview of the number of lessons to be given and their time line; this aspect confused me multiple times. 
A simple time line would provide a good degree of guidance to the reader here.\n\n- page 2, line 67: the quote by Burkov is lovely, I would have loved to learn in what context it is presented\n\n- page 2, line 103 (left column) \"We have a set of vectors X ...\": this appears to be a bit inconsistent to me, figure 1 discusses $y = f(x_1, ..., x_n)$ but the text talks about $X$ (capital) as the entire set. It would be nice to have consistent variable naming, so that the text is aligned to the figures.\n\n- page 2, line 79 (right): \"the evil over-fitting game\" sounds very biased to me. The text does not explain if or how the notion of evil is explained to the students. I personally would try to dismiss these opinionated terms in courses as much as possible. Here \"evil\" conveys a negative intent. The following paragraph \"The example may sound a bit absurd ...\" for me is unable to fix the confusion about the synthetic over-fitting example. I see the danger here to loose learners at this point.\n\n- page 3, line 111 (left) \"L\u2019Or\u00e9al's Rule to choose the optimal slope (Because I'm worth it)\" I didn't get this metaphor at all.\n\n- page 3, line 113 (left) \"Then we attack the non-linear separable classification of virginica versus versicolor to stress the idea\nthat uncertainty is an inevitable fact for ML models\" I am not sure what attack refers to here. I think introducing students to adversarial attacks is a good idea, but the text leaves it unclear if that the intend\n\n- page 3, line 119 (left): \"The confusion matrix is well understood with the boy who cried wolf example of Type-I error.\" It's unclear to me what the 'boy who cried wolf example' is. A reference would help.\n\n- page 3, line 152 (left): \"The minus sign and $log_2 (p_i )$ are the most feared beasts.\" it is unclear what this means. I urge the authors to use scientific language, please. \n\n- page 3, line 152 (left): \"The minus sign and $log_2 (p_i )$ are the most feared beasts.\" if learners struggle with this, one may wonder if it makes sense to introduce so many tricky concepts as noted in this paragraph and what level of depth are they expected to reach. This would tie back to the learner profiles (if present) mentioned earlier, learner profiles would clearly show which level of understanding is appropriate.\n\nOverall, it would be wonderful to see an objective quality assessment of this curriculum in the future. Given clear learning goals and learner profiles, such a quantitative analysis of e.g. post-course surveys could also help to assess if learning goals have been met.\n\nThanks to the authors for submitting their paper. I enjoyed reading it and learning how the structured their course.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "The submission describes an introductory course on Machine Learning targeted at\nstudents outside of the science, technology, engineering, and mathematics\n(STEM) fields such as Law or Marketing.\n\nThe submission has great value and can therefore be recommended for acceptance.\nIn particular, it includes reports on student feedback to different parts of\nthe taught course, which will be valuable information for other workshop\nattendees. Additionally, the described course material teaches important\nconcepts such as interpretation of black-box models and overfitting using\nsimple examples. 
It also includes hands-on coding parts, which is an effective\ntool to increase student motivation.\n\nApart from minor typographical mistakes which will not be covered here, some\nsuggestions to improve future submission include:\n\n* line 002: The submitted paper's title does not exactly match the submission's\n online title.\n* line 051: \"Lessons 4 to 7 cover Machine Learning fundamentals.\": Lesson 1..3\n are not introduced up to this point.\n* line 105: Even though well known, the Iris dataset is missing a citation.\n One could also add a reference backing the \"criticism about its goodness to\n teach ML concepts\".\n* line 118: The \"boy who cried wolf example of Type-I error\" could benefit from\n a short explanation.\n* lines 138 and 146: Fig. 3 and 4 are not referenced in the text body.\n* Even though probably known by most readers, one should write out acronyms\n such as STEM, AOC, ROC, SVM, SMOTE upon first usage.\n\nOther that that, the presented course and especially the author's experience\nwill be beneficial for the present workshop.\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "# Summary \nThe paper presents an experience report of teaching an introductory class on machine learning to business and law students that do not necessarily have a strong background in mathematics and/or programming. On a high level, the paper describes the curriculum, diving deeper into only some aspects. The pathway starts with linear regression and progresses up to logistic regression on the Iris data set, neural networks, and SVMs. Finally, the students are exposed to a practical challenge that they solve using graphical environments provided by Azure ML.\n\n# Overall evaluation\nI find the proposed curriculum not particularly surprising (lin. reg -> neural networks), I was a bit surprised to see SVMs as one of the models the students were exposed to - given that they are mathematically more involved than, e.g., simple neural networks, in my opinion. Other than that I really liked some of the anecdotal cues such as giving a grade bonus based on performance on the final project or using a graphical environment for modeling. Or, the \"evil overfitting game\" -> definitely something to try. I would have appreciated a little more insight into what the students liked/disliked or more tangible material in the form of code and data. Nevertheless I think the authors should participate in the workshop.\n\n# Minor comments:\n\n- Page 2, line 63: Just up for debate: While I understand the intention behind disillusioning students that were exposed to hype-fueled media reports, I am note sure if harshly stating something along the lines of \"machines don't learn\" or \"ML is just glorified curve fitting\" is strategically useful early on in a class. After all, what constitutes learning it is still a matter of definition. A little mystery and fascination (think of chatting with GPT-3 or using Google draw) can be helpful as long as the mathematical and algorithmic underpinning is not neglected. 
But again, this is just my opinion and another point of view and no criticism of the submission.\n\n- Page 2, line 98: \"predict the weight of an undergraduate student just knowing its height and gender.\" --> \"predict the weight of undergraduate students just knowing their height and gender.\" or \"predict the weight of an undergraduate student just knowing his/her height and gender.\"\n\n- Page 3, line 153: I have made the same experience and found that the minus sign and log become less scary when referring to the information content. An event with a probability of 1/4 is as probable as picking one of (1/4)^(-1) = 4 balls at random and we'd need log(4) bits to distinguish all of these balls. Taking the expected value gives us entropy.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 3 |
aLdr-6rFn5j | aLdr-6rFn5j | aLdr-6rFn5j | An Interactive Web Application for Decision Tree Learning | [
"Anonymous"
] | ECMLPKDD.org 2020 Workshop | 2020 | Decision tree learning offers an intuitive and straightforward introduction to machine learning techniques, especially when students are used to program imperative code. Most commonly, trees are trained using a greedy algorithm based on information-theoretic criteria. While there are many static resources such as slides or animations out there, interactive visualizations tend to be based on somewhat outdated UI technology and dense in information. We propose a clean and simple web application for decision tree learning that is extensible and open source. | [
"decision tree learning",
"web application",
"interactive tool"
] | https://openreview.net/pdf?id=aLdr-6rFn5j | @inproceedings{
anonymous2020an,
title={An Interactive Web Application for Decision Tree Learning},
author={Anonymous},
booktitle={ECML PKDD 2020 Workshop Teaching ML},
year={2020},
url={https://openreview.net/forum?id=aLdr-6rFn5j}
} | 1594219869358 | [{"text": "This paper presents a web application for learning decision trees (referred to as DTs throughout the paper) as well as their nuances. The authors do an excellent job situating why learning decision trees is important. The web application is well explained from the students perspective, but I would have liked a few more details on the instructor's perspective.\u00a0\n\nThere are a few elements of the\u00a0paper that feel a bit disjointed. For example, the paper spends considerable space on the \"tennis data set\" but yet the default data set for the web application concerns monsters. Similar if the intended audience has a background in computer science, how does this tool use and/or enhance one's computer science training?\u00a0\n\nAdditionally, I would have liked to see a connection to how this tool fits into a machine learning course. For example, is this a student's first or second contact with the material? Do students use this tool to inform pseudocode drafts?\u00a0\n\nPros -\u00a0\n* Well designed tool for learning and understanding the nuances of decision trees\n* Thoughtful presentation of the web application and strong motivation for learning about decision trees\n\nCons -\u00a0\n* Some issues with the structure of the current draft\u00a0\n* Missing explicit placement into a machine learning course", "rating": "6: Marginally above acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "I can see the potential of the tool introduced here to teach and explain the very basics of decision trees (and how to evaluate classifiers).\n\nBUT:\n\n1. I'm not sure it is entirely finished -- the \"Hint\" functionality did not do anything, as far as I could make out, and best practice for UI design for this kind of interactive tutorial would be to have documentation in the form of tooltips right inside the app.\n\n2. I think the decision to use a non-binary decision tree algorithm like ID3 is suboptimal from a didactic point of view. How to find optimal *binary* splits is much easier to explain/understand and the resulting tree structures are easier to follow as well. \n\n3. This tool only aids in the visual explanation of very simple decision tree topics, which, in my teaching experience, only rarely cause confusion or issues in the classroom, namely the basic \"greedy\" recursive split search and how to use the tree to classify new data. \nThis seems like a wasted opportunity and limits the utility of the proposed tool-- more challenging and practically very important topics that learners often struggle with in my experience, like, e.g., pruning trees to avoid overfitting, how surrogate splits work, the instability of tree structure to small data perturbations, etc, are not covered by the functionality of the tool.\n\n4. This tool is not based on any of the popular ML frameworks in R or Python, which seems another wasted teachingg opportunity. If it were, this would allow students to also see the code generating the respective model and thereby learn to relate theoretical concepts to the corresponding implementation features.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 2 |
a37DEwWs1wu | a37DEwWs1wu | a37DEwWs1wu | Teaching the Foundations of Machine Learning with Candy | [
"Anonymous"
] | ECMLPKDD.org 2020 Workshop | 2020 | Machine learning is ubiquitous in decision-making processes across society. The presence and development of ML drives a need for improved education in key concepts at the secondary and tertiary levels that not only trains people to become informed citizens but also trains future researchers to be both principled and ethical practitioners. In this vein, we present a structured classroom activity that simultaneously teaches both supervised classification and critical thinking about ML applications and ethics. We use an active, object-based learning approach to teach supervised classification using a variety of candies, and a problem-based scenario to encourage critical questions about ethics in ML applications. | [
"Machine Learning",
"Education",
"Undergraduate",
"Secondary Education",
"Active-Learning"
] | https://openreview.net/pdf?id=a37DEwWs1wu | @inproceedings{
anonymous2020teaching,
title={Teaching the Foundations of Machine Learning with Candy},
author={Anonymous},
booktitle={ECML PKDD 2020 Workshop Teaching ML},
year={2020},
url={https://openreview.net/forum?id=a37DEwWs1wu}
} | 1594219871867 | [{"text": "Strong points:\n1. The motivation section is well-written. The authors explain their motivations, demonstrate a societal need for a solution, and also showcase previous methods used like class of clans.\n2. The paper's idea of using candy as a hands-on machine learning teaching method is novel. Overall, the paper communicates a variety of the author's unique perspectives. For example, the authors believe the original Iris dataset was created with benchmarking in mind, not learning. This is another good point.\n3. The episodes and intended learning outcomes section is also well organized into specific topics. The topics themselves are also good introductory topics for teaching machine learning.\n\nWeak points:\n1. While paper starts strong, the conclusion is weak. Before the conclusion, the author explains the motivations and ideas to teach machine learning using candy. The conclusion simply states there's an ethics component included. While that's important to include, I would have preferred to see the outcomes of using such a teaching method. Without that, this paper feels incomplete.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "The authors present a self designed introductory teaching lesson of about 3 to 4 hours to convey the principles and workings of some of the fundamental aspects of ML: feature engineering,\nKNN and logistic regression as classification methods and performance evaluation in a playful manner using chocolate and candy as a practical toy dataset and additional motivator.\n\nTheir approach follows the guidelines of the Carpentries and aims to develop intuition for ML, bridging the gap between the two most dominant ML teaching styles which either are primarily focussed on\ntheory and might provide a daunting entry barrier on the one hand, or aim at quick implementation and application of ML algorithms without detailed knowledge of the inner workings on the other hand.\n\nI like the hands-on approach suggested here which teaches basic principles by a very plausible example. Also, the authors put strong focus early on to discuss ethical issues of machine learning and encourage lesson participants to think critically about adoption of ML approaches for given problems which is a strong plus in my opinion and often not emphasized enough in existing tutorials.\n\nI would be very curious to learn how lesson participants respond to this approach. Unfortunately, the authors only present their teaching procedure without giving details on how well their approach worked in practice (however that could be measured...), how participants\nreacted, what they had problems with, etc. Also I am missing a clear definition of the target audience that course was designed for. \nAs an additional (but less important) point: as much as I enjoy candy myself and understand the authors picked it for their motivational value, maybe replacing it with a more healthy alternative would be something to think about.\n\nThe authors provide their teaching materials as a github repository which is great and allows direct adoption. 
Unfortunately, author identification became possible through that and the review can no longer be considered completely \"blind\".\nNevertheless, I suggest accepting the submission since the approach is innovative and will probably invoke interesting discussions among participants.", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "### Summary: \nThis work addresses the middle ground between math-heavy foundational and application-focused black-box Machine Learning (ML) teaching styles. Authors proposed a fun active learning strategy to understand ML concepts and ethical decision processes using candies. The proposed 10 episode strategy along with their expected learning outcomes are publicly available. Authors found their strategy to be useful for not only for undergraduates but individuals from all ages.\n\n### Strong Points:\n* Very well-motivated paper, there is clearly a need for this kind of teaching style to democratize ML\n* Proposed strategies along with the learning outcomes are thoughtful and effective\n* Prerecorded data is made available with support for the virtual format\n\n### Weak Points:\n* Authors mentioned the strategy was effective across all ages. I wonder the effectiveness of teaching with candies for varying age groups. Young children might be too tempted by the candies than actually learning. On the other hand, Older people might get disinterested because they might feel candies are too distracting. Any exploration on the effectiveness of learning across ages will be valuable.\n\n### Other comments:\n* Proper use of this teaching style will definitely be effective to instil critical thinking and improve the public perception towards ML as not a black box, I commend the authors on that.\n* I would love to see how a few of the episodes are actually described in more detail within the paper but I understand there\u2019s so much information that can be crammed into 4 pages.\n", "rating": "10: Top 5% of accepted papers, seminal paper", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}] | 3 |
7lv81_zr4TZ | 7lv81_zr4TZ | 7lv81_zr4TZ | XploreML: An interactive approach to study and explore machine learning models | [
"Anonymous"
] | ECMLPKDD.org 2020 Workshop | 2020 | Due to its achievements in recent years, Machine Learning (ML) is now used in a wide variety of problem domains. Educating ML has hence become an important factor to enable novel applications.
To address this challenge, this paper introduces XploreML -- an interactive approach for lecturers to teach and for students or practitioners to study and explore the fundamentals of machine learning. XploreML allows users to experiment with data preparation, data transformation and a wide range of classifiers. The data sets can be visually investigated in order to understand the complexity of the classification problem. The selected classifier can either be autonomously fitted to the training data, or the effect of manually altering model hyperparameters can be explored. Additionally, source code of configured ML pipelines can be extracted.
XploreML can be used within a lecture as an interactive demo or by students in a lab session. Both scenarios were evaluated with a user survey, where both variants were assessed as positive, with the first yielding more positive feedback.
XploreML can be used online: ml-and-vis.shinyapps.io/XploreML | [
"machine learning education",
"visualization",
"classification",
"human-centered machine learning"
] | https://openreview.net/pdf?id=7lv81_zr4TZ | @misc{
anonymous2020xploreml,
title={Xplore{ML}: An interactive approach to study and explore machine learning models},
author={Anonymous},
year={2020},
url={https://openreview.net/forum?id=7lv81_zr4TZ}
} | 1594219868840 | [{"text": "XploreML is an interactive tool that allows for students\u00a0to explore various classification algorithms dynamically. The paper's current presentation of XploreML leaves it to the reader to determine the learning goals of this tool. Additionally, while the authors discuss using backward design in their implementation for using this tool, without the learning goals specified, it is challenging to understand where and how this tool comes into a machine learning course. Is it the first, second, last, etc contact for a student learning a concept? Finally, one of the features that I was most interested in was the code extraction. I felt that this was shortchanged in the current draft.\u00a0\u00a0\nThe user study presented in this paper does make an effort to address this second issue, by having two different groups of students interact with XploreML either in a lecture or lab setting. However, there is no control group that never works with this tool. Additionally, the survey questions from the user study are about how much one likes the tool (which certainly has its place), instead of an evaluation tool that attempts to determine if working with XploreML leads to deeper understanding of classification.\u00a0\n\nPros -\u00a0\n* Interesting and accessible tool (via `shiny`) \n* Demonstrates that XploreML can be used in lecture or lab settings\n\nCons -\u00a0\n* Does not explicitly connect XploreML to learning outcomes\n* Difficult to determine the impact of this tool on learning\n\nMinor comments:\u00a0\u00a0\n* It would be great if the abstract discussed the level of the students that this tool is intended for \n* Typo in left side of line 041 \"aswell' --> \"as well\" \n* Right side of lines 051 and 052, the authors might consider using a different typeface for the various packages (such as \\texttt{})", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "XploreML presents an interactive GUI for learning fundamentals of supervised classification models. This app has potential to be used as a training tool, however, implementation lacks clarity and stability.\n\n- One major issue is that sever gets disconnected quite often and it is set to default settings when reloaded again. This means that while mentors/students are discussing aspects of current implementation the results will be gone and everything needs to be set again from scratch. \n- The app offers selection of only a few hyperparameters. Learning rate in the neural net, depth in random forest are important adjustable hyperparameters. Also, the number of CV folds and bootstraps and train, validation and test split should be adjustable. \n- The raw data tab in visualization pane does not show class labels. Complete visualization of raw data with class labels is a very fundamental aspect of beginning ML teaching/learning process. \n- Furthermore, in the visualization of data the \u2018parallel coordinates\u2019 does not help understand data for beginners rather it looks confusing. For beginners, clear plots should be used such that students can easily read and understand data stats themselves without a mentor\u2019s intervention. 
Plots of data with more than 4 features are almost unreadable.\n- While running a new model, the results and stats from the previous run remain intact, which is very confusing.\n- A very basic model taught in supervised binary classification is the logistic regression model which is missing in this app. Although LDA is closely related to logistic regression, but for beginners, it will make more sense to make them familiar with logistic regression for classification than LDA.\n\nSummary:\nThe authors propose an interactive GUI for exploring basic supervised classification models, however, it is not very handy in its current state. It offers only a few adjustable parameters, inconsistent view, and unstable server. All these factors make it unlikely to be accepted. ", "rating": "5: Marginally below acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "This *could* be a great resource for experimentation in supervised learning without any programming. The app is well-documented and covers most classifiers that a standard intro to supervised ML would contain.\n\nFor courses that are based on the software stack used here (R + caret), the fact that the code generating the models can be inspected, too, is even more valuable.\n\nUnfortunately, the implementation seems to be very unstable, buggy and slow:\n\n- xgboost did not work on any of the datasets I tried, it would hang for a couple of minutes and then spit out a generic error message\n- results from previous runs were not updated, so one tab would show classification boundaries for a decision tree, say, while the other tab would still show confusion matrices for a LDA model computed earlier. Didactically, that's terrible, because students will not have the self-confidence and experience to recognize the wrong/incongruopus results presented by the app. At the very least, the interface should be programmed so that results that are \"out of date\" are greyed out until they are recomputed for the current model.\n- the server hosting it was very unresponsive for most of the time I tried to experiment with it, loading the app and reccomputing even very simple models like LDA took a (very) long time on all three occasions that I used it\n- there seems to be no way to abort computations once they are under way and users lose patience. This is not a good UX.\n\nIn terms of functionality, I would have liked to see the possibilty of tuning/changing more than 1 hyperparameter for some of the methods, sometimes the interplay of them is very important (e.g. max tree depth and min node size), and having only one of the many hyperparameters configurable is likely to lead to misunderstandings like \"this is the only tuning parameter that matters\" on the side of the students. \n\nAlso:\nPaper needs a spell check (\"aswell\", \"interactivelly\" etc) and should be proof-read by a dilligent native speaker (\"in specific classification\" -> \"specifically classification\", lots of other weird phrases).\n\nAlso:\nCode for the Shiny app should be made public to enable local hosting instead of relying on RStudio's rather limited free hosting.\n\n**Summary:** \nThis is a fairly ambitious project with large potential, not all that well executed. 
I don't think I would use it in my teaching in its current state, too many frustrating bugs and inconsistencies and too much waiting around on the unresponsive server (although the shiny app could be easily hosted locally if the authors make their code public, see below, and this would presumably speed up computation and rendering quite a bit) \n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 3 |
vU1QL3jGmV_ | vU1QL3jGmV_ | vU1QL3jGmV_ | The Volume of Non-Restricted Boltzmann Machines and Their Double Descent Model Complexity | [
"Prasad Cheema",
"Mahito Sugiyama"
] | NeurIPS.cc 2020 Workshop | 2020 | The double descent risk phenomenon has received much interest in the machine learning and statistics community. Motivated through Rissanen's minimum description length (MDL) principle, and Amari's information geometry, we investigate how a double descent-like behavior may manifest by considering the $\log V$ modeling term - which is the logarithm of the model volume. In particular, the $\log V$ term will be studied for the general class of fully-observed statistical lattice models, of which Boltzmann machines form a subset. Ultimately, it is found that for such models the $\log V$ term can decrease with increasing model dimensionality, at a rate which appears to overwhelm the classically understood $\mathcal{O}(D)$ complexity terms of AIC and BIC. Our analysis aims to deepen the understanding of how the double descent behavior may arise in deep lattice structures, and by extension, why generalization error may not necessarily continue to grow with increasing model dimensionality. | [
"Minimum description length",
"geometric volume",
"Boltzmann machine",
"double descent",
"generalization"
] | https://openreview.net/pdf?id=vU1QL3jGmV_ | @inproceedings{
cheema2020the,
title={The Volume of Non-Restricted Boltzmann Machines and Their Double Descent Model Complexity},
author={Prasad Cheema and Mahito Sugiyama},
booktitle={NeurIPS 2020 Workshop: Deep Learning through Information Geometry},
year={2020},
url={https://openreview.net/forum?id=vU1QL3jGmV_}
} | 1603141806848 | [{"text": "This paper explores a phenomenon of intense recent interest, double descent, where for increasing model complexity we see test error falls, rises, then falls again for extremely over-parametrized models. The paper studies a particular class of lattice models for which some theoretical observations can be made. In particular, defining model complexity in a Bayesian setting, they show that in addition to the number of parameters, an effective volume of distinguishable models plays a role. Moreover, they bound this term and show that for the studied class, limits on the model volume lead to a double descent behavior. \n\nThe introduction to the problem and related work was clear and useful, and the approach taken was interesting and novel. I just got a taste for the approach from this short write-up, but it seems a promising direction I would like to learn more about. In a longer version, I would like to see more development of Eq. 1, as the appearance of the model volume term was not familiar to me. Also, my it seems that the information geometric results were used to derive bounds on the log-volume. I'm wondering if it is possible to define an even simpler class of models where the model volume can be understood more intuitively. Even if such a class did not exhibit double descent, it would help build intuition about this term. ", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
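A note on the record above: the $\log V$ term in this abstract is the Jeffreys-volume piece of the classical MDL (stochastic complexity) expansion. The block below restates that expansion from the general literature, under the usual regularity conditions and up to $o(1)$ terms; it is background context, not a formula quoted from the paper.

```latex
% Parametric complexity of a D-parameter model M fit to n samples
% (Rissanen-style asymptotics); the second term is the "log V" model-volume
% contribution discussed in the abstract above.
\[
\mathrm{COMP}(\mathcal{M}) \;\approx\; \frac{D}{2}\log\frac{n}{2\pi}
  \;+\; \log \int_{\Theta} \sqrt{\det I(\theta)}\,\mathrm{d}\theta,
\qquad
\mathrm{MDL} \;\approx\; -\log p(x \mid \hat{\theta}) \;+\; \mathrm{COMP}(\mathcal{M}).
\]
```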
kvqPFy0hbF | kvqPFy0hbF | kvqPFy0hbF | DIME: An Information-Theoretic Difficulty Measure for AI Datasets | [
"Peiliang Zhang",
"Huan Wang",
"Nikhil Naik",
"Caiming Xiong",
"richard socher"
] | NeurIPS.cc 2020 Workshop | 2020 | Evaluating the relative difficulty of widely-used benchmark datasets across time and across data modalities is important for accurately measuring progress in machine learning. To help tackle this problem, we propose DIME, an information-theoretic DIfficulty MEasure for datasets, based on Fano’s inequality and a neural network estimation of the conditional entropy of the sample-label distribution. DIME can be decomposed into components attributable to the data distribution and the number of samples. DIME can also compute per-class difficulty scores. Through extensive experiments on both vision and language datasets, we show that DIME is well aligned with empirically observed performance of state-of-the-art machine learning models. We hope that DIME can aid future dataset design and model-training strategies. | [
"Dataset understanding",
"Difficulty Measure",
"Information Theory",
"Fano's Inequality",
"Conditional Entropy"
] | https://openreview.net/pdf?id=kvqPFy0hbF | @inproceedings{
zhang2020dime,
title={{DIME}: An Information-Theoretic Difficulty Measure for {AI} Datasets},
author={Peiliang Zhang and Huan Wang and Nikhil Naik and Caiming Xiong and richard socher},
booktitle={NeurIPS 2020 Workshop: Deep Learning through Information Geometry},
year={2020},
url={https://openreview.net/forum?id=kvqPFy0hbF}
} | 1603141806495 | [{"text": "While I applaud the goal of the paper, I have some issues with the currently formulation of the paper.\n\nIt suggests a measure of dataset difficulty which amounts to building an estimate of the mutual information in order to use Fano's inequality to estimate a lower bound on the optimal error rate achieved. Unfortunately the bounds are broken. Fano's inequality gives us a lower bound on the minimum error probability in terms of the conditional probability. We would therefore need to be able to lower bound the conditional entropy to provide a valid bound. Since the conditional entropy is inversely proportional to the mutual information we would therefore need an upper bound on the mutual information. The paper then uses a MINE style estimator the mutual information which itself purports to be a *lower* bound. This bound goes the wrong way in order to accomplish its goal. In addition the style of estimator used here (MINE) has been demonstrated to not actually be a valid bound itself, further complicating the story. (see. e.g. Poole et al. \"On variational bounds of mutual information\")\n\nIf what we wanted was an upper bound on the conditional entropy, we could have just looked at the conditional likelihood of any trained model, as any trained models likelihood provides a valid upper bound on the conditional entropy. As their table demonstrates, their estimator, which is supposed to provide a *lower* bound on the minimum error rate is actually higher than the reported results on all but one of the datasets, which itself has not really attracted much attention. This just goes to show that their estimator is quite bad, and much worse at modelling the distributions than other trained models. I understand the desire to have some model independent notion of hardness for each dataset, but it's not as if the MINE style estimator used here is independent of any modelling assumptions. The fact that the bounds come out so poor is almost surely because a weak model was used to try to estimate what is a hard structured generation task. I would be curious to see what sort of bounds are generated if you use a similarly weak mlp but instead use it to bound the conditional entropy directly with the cross entropy. I suspect it actually works much better than the approach here.\n\nBacking up for a second, even if we had good estimates of the conditional entropy, and therefore assumed we had good estimates of the mutual information, it's a bit unfair to call that a difficulty for the dataset. By this metric a if the input and labels were independent this would be called 'difficult'. It's surely difficult in the sense that the best anyone could ever do at prediction is only ever as good as random guessing, but this doesn't capture the semantic notion of difficulty that I think people want when they discuss datasets.\n\nMinimally, I think the paper needs a substantial rewrite to point out some of these flaws with the method, lest the reader leaves with some misconceptions.", "rating": "3: Clear rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "This paper considers an approach to measure the \"difficulty\" of modeling a dataset. I think the motivation and approach are ill-conceived. However, there may be some value in exploring the proposed measure for other purposes. 
\n \n- Sample complexity and the complexity of the hypothesis space (\"distributional complexity\") are quite related, this is the heart of statistical complexity theory and ideas like VC dimension. \n\n- The most glaring issue is that in Eq. 3, the most natural estimator for H(y|x) is to lower bound it with the cross entropy. But this reveals the tautological nature of this paper: a difficult problem is one where it is difficult to accurately predict the labels.\n\n- The fact that FakeData is the hardest confirms the problem with the set-up. If you were to try to predict completely random labels, your prediction error is maximal and therefore this task is the \"most difficult\". Yet, in some ways this problem is not difficult at all, as any learner will have the same performance on this task! \n \n- The MINE estimator has a number of issues, other variations are better. See Poole et al. \"On variational bounds of mutual information\"\n\n- I don't believe we can learn much from Table 1, except the extent to which your D-V approximation leads to a sub-optimal classifier compared to SOTA ones. (Note that T in Eq. 5 can be interpreted as a classifier - so you are essentially just doing a prediction with a different form for the loss function.) \n\nAt the end, you mention an idea which might be worth pursuing with the approach: characterizing a per-class difficulty score. I still don't think \"difficulty\" is the right word, but it would at least tell you if there are significant differences between the shapes of the distributions for each class. You could interpret this as measuring the classes that have the most variation in a dataset. In medical domains, for instance, this could be useful, as it might signal that one disease has a lot more variation, and potentially consists of multiple distinct disorders that have not yet been recognized. \n", "rating": "5: Marginally below acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}] | 2 |
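For the DIME record above, the disagreement in the reviews is about which direction Fano's inequality can be used. The sketch below is just the textbook inequality turned into a numeric lower bound on error probability, with entropies in nats; the function name and the grid-search inversion are illustrative choices, not the paper's actual estimator.

```python
import numpy as np

def fano_error_lower_bound(cond_entropy_nats, num_classes, grid=100_000):
    """Smallest error probability P_e consistent with Fano's inequality
    H(Y|X) <= h_b(P_e) + P_e * log(K - 1), with entropies in nats."""
    K = num_classes
    p = np.linspace(1e-9, (K - 1) / K, grid)            # candidate error rates
    h_b = -p * np.log(p) - (1 - p) * np.log(1 - p)      # binary entropy of P_e
    rhs = h_b + p * np.log(max(K - 1, 1))
    feasible = p[rhs >= cond_entropy_nats]              # rhs is increasing in p on this range
    return feasible.min() if feasible.size else (K - 1) / K

# Example: 10 classes and an estimated H(Y|X) of 0.5 nats.
print(fano_error_lower_bound(0.5, 10))
```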
dbCPuJ3wbDl | dbCPuJ3wbDl | dbCPuJ3wbDl | Sniper GMMs: Structured Gaussian mixtures poison ML on large n small p data with high efficacy | [
"Anonymous"
] | NeurIPS.cc 2020 Workshop | 2020 | We propose a method for structured learning of Gaussian mixtures with low KL-divergence from target mixture models that in turn model the raw data. We show that samples from these structured distributions are highly effective and evasive in poisoning training datasets of popular machine learning training pipelines such as neural networks, XGBoost and random forests. Such attacks are especially destructive given the current uptrend towards distributed machine learning with several untrusted client devices that provide their data to servers and cloud service providers for privacy-preserving distributed machine learning. In the current day and
age of machine learning, Gaussian mixtures are perceived to be an older/classical technique in practice, although they are still actively studied from a theoretical perspective. Therefore it is quite interesting to see that they can be highly effective in performing data poisoning attacks on complex ML pipelines if learned with the right structural constraints. | [
"Structured mixture distribution learning",
"data poisoning",
"modified EM",
"distance correlation",
"Gaussian mixtures"
] | https://openreview.net/pdf?id=dbCPuJ3wbDl | @misc{
anonymous2020sniper,
title={Sniper {GMM}s: Structured Gaussian mixtures poison {ML} on large n small p data with high efficacy},
author={Anonymous},
year={2020},
url={https://openreview.net/forum?id=dbCPuJ3wbDl}
} | 1603141809475 | [{"text": "The topic of the paper is interesting. However, it seems that the paper has been wrapped up quickly and present some errors.\nFor example,\nIn Theorem 2.1: missing notation for distance correlation argmin_Z(X,Z).\n\nIt is important to distinguish between distance between random variables (eg, Mutual information) and distances between distributions (eg, KL).\nIn the proof of 2.1, the Authors use a wrong definition for KL.\nThe paper is not yet ready for communication as it lacks references and need to correct the definition of KL and its usage.\n\n- For structured GMMs, I recommend \nGaussian parsimonious clustering models\nhttps://www.sciencedirect.com/science/article/abs/pii/0031320394001256\n\n- State also that GMMs are universal smooth density approximators\n\n- For outliers contamination, you may be interested by\nRobust parameter estimation with a small bias against heavy contamination\nhttps://www.sciencedirect.com/science/article/pii/S0047259X08000456\n\n- For bounds on KL between GMMs (or any other joint convex statistical distance), we can relax by a Linear Program as described in\nOn The Chain Rule Optimal Transport Distance\nhttps://arxiv.org/abs/1812.08113\n", "rating": "3: Clear rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "In Theorem 2.1, the authors appear to have used KL-divergence for the term h(X,Z)- h(Z) = h(X|Z). Although h(X|Z) is called the relative entropy in information theory (KL-divergence is also called the relative entropy in probability), these two terms are different. To emphasize, h(X|Z) is not equal to KL(p_X || p_Z) = \\int p_X \\log p_Z/p_X. KL divergence does not depend on the joint distribution of (X, Z), only on the marginals. Relative entropy h(X|Z) does depend on the marginals. Please fix this as it is likely to confuse all readers.\n\nThe KL-divergence bound at the bottom of page two is not proved in Appendix B, as claimed in the paper. After some searching, it appears to be in Appendix A. Appendix E only has a title, and appendix G is entirely missing. \n\nThe paper is simply not ready to submit anywhere. Given the confusion regarding KL divergence in the main theorem, I would advise the authors to carefully check if their approach is still valid.\n", "rating": "2: Strong rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 2 |
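Both reviews of the Sniper GMMs record above hinge on the same distinction, so it is worth writing out. These are the standard textbook definitions, stated here as background rather than quoted from the paper:

```latex
% KL divergence between the marginals of X and Z (no joint distribution involved):
\[
D_{\mathrm{KL}}(p_X \,\|\, p_Z) \;=\; \int p_X(x)\,\log\frac{p_X(x)}{p_Z(x)}\,\mathrm{d}x .
\]
% Conditional (differential) entropy, which does depend on the joint of (X, Z):
\[
h(X \mid Z) \;=\; -\int p_{X,Z}(x,z)\,\log p_{X \mid Z}(x \mid z)\,\mathrm{d}x\,\mathrm{d}z
  \;=\; h(X,Z) - h(Z) .
\]
```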
UsDZut_p2LG | UsDZut_p2LG | UsDZut_p2LG | Estimating Total Correlation with Mutual Information Bounds | [
"Pengyu Cheng",
"Weituo Hao",
"Lawrence Carin"
] | NeurIPS.cc 2020 Workshop | 2020 | Total correlation (TC) is a fundamental concept in information theory to measure the statistical dependency of multiple random variables. Recently, TC has shown effectiveness as a regularizer in many machine learning tasks when minimizing/maximizing the correlation among random variables is required. However, to obtain precise TC values is challenging, especially when the closed-form distributions of variables are unknown. In this paper, we introduced several sample-based variational TC estimators. Specifically, we connect the TC with mutual information (MI) and constructed two calculation paths to decompose TC into MI terms. In our experiments, we estimated the true TC values with the proposed estimators in different simulation scenarios and analyzed the properties of the TC estimators. | [
"Mutual Information",
"Total Correlation",
"Estimation"
] | https://openreview.net/pdf?id=UsDZut_p2LG | @inproceedings{
cheng2020estimating,
title={Estimating Total Correlation with Mutual Information Bounds},
author={Pengyu Cheng and Weituo Hao and Lawrence Carin},
booktitle={NeurIPS 2020 Workshop: Deep Learning through Information Geometry},
year={2020},
url={https://openreview.net/forum?id=UsDZut_p2LG}
} | 1603141808979 | [{"text": "A very nice workshop submission that shows how one can leverage recent advances in mutual information estimation in order to estimate the total correlation.\n\nOnce the total correlation is resolved recursively as a set of mutual informations, simply using recent MI estimators like InfoNCE, MINE, NWJ or CLUB can yield an estimate of the total correlation. This is demonstrated on a simple joint Gaussian problem for which the Total correlation is known exactly.\n\nI have some technical quals with the CLUB estimator (I don't believe it's a valid bound at all), but this doesn't directly affect this work, though the language might be tweaked to only refer to it as an estimator rather than a bound.\n\nVery nice paper, short and sweet, a nice idea and nice simple demonstration it can work.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}] | 1 |
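The decomposition that the Total Correlation abstract relies on, TC written as a chain of mutual informations, can be verified in closed form for a multivariate Gaussian. A small self-contained sketch with a random toy covariance; none of this is code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)                 # a valid (positive-definite) covariance

def gauss_entropy(S):
    """Differential entropy of a zero-mean Gaussian with covariance S (nats)."""
    S = np.atleast_2d(S)
    k = S.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(S)[1])

# Total correlation: sum of marginal entropies minus the joint entropy.
tc = sum(gauss_entropy(Sigma[i, i]) for i in range(d)) - gauss_entropy(Sigma)

# Chain decomposition into mutual information terms: TC = sum_i I(X_i ; X_{<i}).
mi_sum = 0.0
for i in range(1, d):
    mi_sum += (gauss_entropy(Sigma[:i, :i]) + gauss_entropy(Sigma[i, i])
               - gauss_entropy(Sigma[: i + 1, : i + 1]))

print(tc, mi_sum)                               # the two values agree
assert np.isclose(tc, mi_sum)
```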
RoTADibt26_ | RoTADibt26_ | RoTADibt26_ | Likelihood Ratio Exponential Families | [
"Rob Brekelmans",
"Frank Nielsen",
"Alireza Makhzani",
"Aram Galstyan",
"Greg Ver Steeg"
] | NeurIPS.cc 2020 Workshop | 2020 | The exponential family is well known in machine learning and statistical physics as the maximum entropy distribution subject to a set of observed constraints, while the geometric mixture path is common in MCMC methods such as annealed importance sampling (AIS). Linking these two ideas, recent work has interpreted the geometric mixture path as an exponential family of distributions to analyse the thermodynamic variational objective (TVO).
In this work, we extend \textit{likelihood ratio exponential families} to include solutions to RD optimization, the IB method, and recent ``RDC'' approaches which combine RD and IB. This provides a common mathematical framework for understanding these methods via the conjugate duality of exponential families and hypothesis testing. Further, we collect existing results to provide a variational representation of intermediate RD or TVO distributions as minimizing an expectation of KL divergences. This solution also corresponds to a size-power tradeoff using the likelihood ratio test and the Neyman-Pearson lemma. In thermodynamic integration bounds such as the TVO, we identify the intermediate distribution whose expected sufficient statistics match the log partition function. | [
"rate-distortion",
"thermodynamic variational objective",
"free energy",
"hypothesis testing",
"legendre duality"
] | https://openreview.net/pdf?id=RoTADibt26_ | @inproceedings{
brekelmans2020likelihood,
title={Likelihood Ratio Exponential Families},
author={Rob Brekelmans and Frank Nielsen and Alireza Makhzani and Aram Galstyan and Greg Ver Steeg},
booktitle={NeurIPS 2020 Workshop: Deep Learning through Information Geometry},
year={2020},
url={https://openreview.net/forum?id=RoTADibt26_}
} | 1603141809639 | [{"text": "This paper considers distributions along a particular interpolating path between two distributions. This family of distributions can be interpreted as an exponential family defined by the likelihood ratio between the two distributions. The authors investigate the prevalence of distributions from this family in various problems of interest including rate-distortion with a logarithmic distortion, the information bottleneck, and a mixture of the two. The authors also investigate problems in hypothesis testing and connect the likelihood exponential family to the optimal error rates. I think this is a good paper, both in the background it provides and the new connections it makes. ", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
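The phrase "likelihood ratio exponential families" in the record above rests on one identity that is standard in the AIS/TVO literature; it is restated below purely as background:

```latex
% The geometric mixture path between p_0 and p_1 is a one-parameter exponential
% family whose sufficient statistic is the log likelihood ratio T(x):
\[
p_{\beta}(x) \;=\; \frac{p_0(x)^{1-\beta}\, p_1(x)^{\beta}}{Z_{\beta}}
  \;=\; \frac{1}{Z_{\beta}}\, p_0(x)\, \exp\!\big(\beta\, T(x)\big),
\qquad
T(x) = \log\frac{p_1(x)}{p_0(x)},
\quad
Z_{\beta} = \int p_0(x)^{1-\beta} p_1(x)^{\beta}\,\mathrm{d}x .
\]
```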
APvrboUZS7w | APvrboUZS7w | APvrboUZS7w | Noisy Neural Network Compression for Analog Storage Devices | [
"Berivan Isik",
"Kristy Choi",
"Xin Zheng",
"H.-S. Philip Wong",
"Stefano Ermon",
"Tsachy Weissman",
"Armin Alaghi"
] | NeurIPS.cc 2020 Workshop | 2020 | Efficient compression and storage of neural network (NN) parameters is critical for resource-constrained, downstream machine learning applications. Although several methods for NN compression have been developed, there has been considerably less work in the efficient storage of NN weights. While analog storage devices are promising alternatives to digital systems, the fact that they are noisy presents challenges for model compression as slight perturbations of the weights may significantly compromise the network’s overall performance. In this work, we study an analog NVM array fabricated in hardware (Phase Change Memory (PCM)) and develop a variety of robust coding strategies for NN weights that work well in practice. We demonstrate the efficacy of our approach on MNIST and CIFAR-10 datasets for pruning and knowledge distillation. | [
"Neural network compression",
"robustness",
"analog storage"
] | https://openreview.net/pdf?id=APvrboUZS7w | @inproceedings{
isik2020noisy,
title={Noisy Neural Network Compression for Analog Storage Devices},
author={Berivan Isik and Kristy Choi and Xin Zheng and H.-S. Philip Wong and Stefano Ermon and Tsachy Weissman and Armin Alaghi},
booktitle={NeurIPS 2020 Workshop: Deep Learning through Information Geometry},
year={2020},
url={https://openreview.net/forum?id=APvrboUZS7w}
} | 1603141809158 | [{"text": "This paper demonstrates weight pruning and distillation for Phase Change Memory (PCM) using MNIST and CIFAR-10 datasets. The paper uses techniques to map the weights to the response of PCM, sign-bit protection and adaptive mapping for small/large weights.\n\nThis is an interesting paper, although its relevance to the present workshop is tenuous. This paper discusses analog storage mechanisms but if one were to think of analog computing, then it would be interesting to develop quantization schemes that incorporate the path of the activations inside the network.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "This paper explores techniques for compression neural networks weights in a way that works well practically in a PCM.\n\nI appreciate the applied application, I have to admit I didn't know what a PCM was before doing some external reading up on it. Would have helped reach a broader audience to include a bit of introductory text. \n\nOtherwise seems like a reasonable set of things to consider for the specific hardware application in question, and I don't really feel too qualified or knowledgeable to judge beyond that.", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}] | 2 |
U7-z8CD2nYg | U7-z8CD2nYg | U7-z8CD2nYg | Quality Estimation & Interpretability for Code Translation | [
"Mayank Agarwal",
"Kartik Talamadupula",
"Stephanie Houde",
"Fernando Martinez",
"Michael Muller",
"John Richards",
"Steven Ross",
"Justin Weisz"
] | NeurIPS.cc 2020 Workshop | 2020 | Recently, the automated translation of source code from one programming language to another by using automatic approaches inspired by Neural Machine Translation (NMT) methods for natural languages has come under study. However, such approaches suffer from the same problem as previous NMT approaches on natural languages, viz. the lack of an ability to estimate and evaluate the quality of the translations; and consequently ascribe some measure of interpretability to the model’s choices. In this paper, we attempt to estimate the quality of source code translations built on top of the TransCoder model. We consider the code translation task as an analog of machine translation for natural languages, with some added caveats. We present our main motivation from a user study built around code translation; and present a technique that correlates the confidences generated by that model to lint errors in the translated code. We conclude with some observations on these correlations, and some ideas for future work. | [
"machine translation",
"neural machine translation",
"code translation",
"ai for code",
"computer assisted programming",
"translation",
"machine learning",
"natural language processing",
"hci",
"human computer interaction"
] | https://openreview.net/pdf?id=U7-z8CD2nYg | @inproceedings{
agarwal2020quality,
title={Quality Estimation {\&} Interpretability for Code Translation},
author={Mayank Agarwal and Kartik Talamadupula and Stephanie Houde and Fernando Martinez and Michael Muller and John Richards and Steven Ross and Justin Weisz},
booktitle={NeurIPS 2020 Workshop on Computer-Assisted Programming},
year={2020},
url={https://openreview.net/forum?id=U7-z8CD2nYg}
} | 1602617099936 | [{"text": "This paper studies the quality of code translation using a blackbox deep neural network (DNN) model based on the same principles as DNNs used for machine translation. The key challenge is that evaluating the performance of these models in terms of the quality of the code generated (i.e., beyond correctness) is challenging.\n\nTo address this issue, they perform a user study to gain qualitative insights into the performance of the DNN, as well as a quantitative evaluation using a linter to diagnose both style issues as well as code errors. I thought the user study was particularly interesting, and the quantitative insights highlight some of the key challenges in applying machine learning to the programming domain (and possibly more broadly).\n\nTheir key findings include issues such as the presence of \u201cobvious\u201d mistakes, including high confidence ones. They also find that the DNN is not good at producing code in a certain style, though it might be possible to fine-tune it to do so.\n\nOverall, I think the paper studies an important research problem and identifies some interesting insights.\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
6zafcLROWAd | 6zafcLROWAd | 6zafcLROWAd | SampleFix: Learning to Correct Programs by Efficient Sampling of Diverse Fixes | [
"Hossein Hajipour",
"Apratim Bhattacharyya",
"Mario Fritz"
] | NeurIPS.cc 2020 Workshop | 2020 | Automatic program correction holds the potential of dramatically improving the productivity of programmers. Recent advances in machine learning and NLP have rekindled the hope to eventually fully automate the process of repairing programs. A key challenge is ambiguity, as multiple codes -- or fixes -- can implement the same functionality, and there is uncertainty on the intention of the programmer. As a consequence, datasets by nature fail to capture the full variance introduced by such ambiguities. Therefore, we propose a deep generative model to automatically correct programming errors by learning a distribution over potential fixes. Our model is formulated as a deep conditional variational autoencoder that can efficiently sample diverse fixes for a given erroneous program. In order to account for inherent ambiguity and lack of representative datasets, we propose a novel regularizer to encourage the model to generate diverse fixes. Our evaluations on common programming errors show strong improvements over the state-of-the-art approaches. | [
"Automatic program repair",
"generative models",
"conditional variational autoencoder"
] | https://openreview.net/pdf?id=6zafcLROWAd | @inproceedings{
hajipour2020samplefix,
title={SampleFix: Learning to Correct Programs by Efficient Sampling of Diverse Fixes},
author={Hossein Hajipour and Apratim Bhattacharyya and Mario Fritz},
booktitle={NeurIPS 2020 Workshop on Computer-Assisted Programming},
year={2020},
url={https://openreview.net/forum?id=6zafcLROWAd}
} | 1602617098788 | [{"text": "### Summary ###\nThe paper addresses the problem of bug fixing using a conditional VAE that generates possible fixes, and a compiler that checks each of the suggested fixes. The paper also proposes a regularizer loss to control the diversity of suggested fixes.\n\nSince the main contribution of the paper is its empirical results, I put more weight on the validity of the evaluation. I was not convinced that the used dataset is meaningful and challenging, that the right baselines were used, and that the comparison was fair for the baselines. I thus vote for rejection at this time. I hope that the authors will improve their evaluation and submit this as a full paper later.\n\n### Strengths ###\nThe proposed loss function of taking the two candidates whose distance is the largest (Equation 4) is interesting. \n\n### Weaknesses ###\nThe paper claims \"strong improvements over state-of-the-art\", but I am not sure that the evaluation is correct and fair.\nFor example, did the baselines get to use the same beam size?\nDid all baselines get to use the same number of suggestions (T) and compiler checks? \nDid all baselines have access to a compiler?\nSince the model is relatively simple (LSTM seq2seq) - did all baselines get the same number of layers / LSTM units?\nHow does a simple LSTM seq2seq+attention+copy or a Transformer perform?\n\nThe dataset -\nThe example fixes are mostly trivial to solve. Figures 3-4-5-6 just miss or have an extra closing curly bracket. Also, I am not sure that all the solutions that are marked as \"correct\" with green checkmarks are indeed \"correct\", as they are very different. For example, in Figure 5, why an assignment that is crucial for the correctness of the program can be correctly replaced with a `printf`?.\nFigures 7,9 complete missing variable initializations, Figures 8(a) and Figure 8(b) fix missing semicolons. None of these examples really require machine learning to solve.\n\n### Minor concerns ###\nI am also bothered by the claims of the paper. \nThe authors claim that in previous work \"model is trained to predict a single location and fix for each error\", and that, in contrast, the proposed model learns a \"distribution over potential fixes\".\nDon't all neural models can output a distribution over outputs, and sample or provide multiple candidates?\n\nThe authors also claim that their approach can \"efficiently sample diverse fixes\" - how is this approach more **efficient** than sampling from any other model?\n\nI did not understand how the proposed BMS objective encourages diversity (Equation 3)\n", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
S4k4ZUsSlqV | S4k4ZUsSlqV | S4k4ZUsSlqV | A Modular Interface for Multimodal Data Annotation and Visualization with Applications to Conversational AI and Commonsense Grounding | [
"Anonymous"
] | graphicsinterface.org 2020 Conference | 2020 | Artificial Intelligence (AI) research, including machine learning, computer vision, and natural language processing, requires large amounts of annotated data.
The current research and development (R&D) pipeline involves each group collecting their own datasets using an annotation tool tailored specifically to their needs, followed by a series of engineering efforts in loading other external datasets and developing their own interfaces, often mimicking some components of existing annotation tools.
We present a modular annotation, visualization, and inference software framework for computational language and vision research.
Our framework enables researchers to set up a web interface for efficiently annotating language and vision datasets, visualizing the predictions made by a machine learning model, and interacting with an intelligent system.
In addition, the tool accommodates many of the standard and popular visual annotations such as bounding boxes, segmentation, landmark points, temporal annotation and attributes, as well as textual annotations such as tagging and free-form entry. These annotations are directly represented as nodes and edges as part of the graph module, allowing visual and textual information to be linked.
Extensible and customizable as required by individual projects, the framework has been successfully applied to a number of research efforts in human-AI collaboration, including commonsense grounding of language and vision, conversational AI, and explainable AI. | [
"HCI",
"explainable AI",
"conversational AI",
"commonsense grounding",
"multimodal annotation",
"language and vision"
] | https://openreview.net/pdf?id=S4k4ZUsSlqV | @misc{
anonymous2020a,
title={A Modular Interface for Multimodal Data Annotation and Visualization with Applications to Conversational {AI} and Commonsense Grounding},
author={Anonymous},
year={2020},
url={https://openreview.net/forum?id=S4k4ZUsSlqV}
} | 1586010690370 | [{"text": "This paper presents an annotation tool to assist the labeling of language or vision datasets as used for machine learning. The claimed contribution of this tool are modularity and, in a way, interoperability.\n\nWhile I can see the merits of a unified tool, I am not sure what is proposed in this paper qualifies as a novel contribution to the field of HCI. The tool may indeed be useful, but the only claim to this comes from the use for some of the authors' own projects -- while this is presented instead as a universal tool addressing the issue of different research labs using different tools. Given this claim, one would expect some external validation, grounded in HCI methodology.\n\nSecondly, I am not sure why there is a problem in the first place. Surely over the past many decades, if indeed the lack of a standardized tool is such a dire situation, there would be attempts at creating such a solution. The motivation for this being a problem is not argued convincingly enough in the paper. One possible approach to this would be again grounded in HCI methodology, such as conducting interviews with current users of annotation tools.\n\nFinally, in my almost 30 years of experience, I have used various tools for these purposes, and none of these was proprietary to the labs where I conducted research. In fact, some annotations tools were provided open source or free from various universities or institutes -- this suggests that there aren't any insurmountable barriers to sharing (and potentially, standardizing) such tools. Given this, it is not clear to me whether the proposed interface actually solves a real problem. This doesn't mean that the proposal tool is not good, but it means that its value-proposition may not be that one claimed by the authors in the abstract.\n\nOne minor point that I would encourage the authors to consider in a subsequent revision of the paper: the \"integration of language and vision in machine learning applications\" is not necessarily new, and more important, it is the other way around, especially for language (it is recently that ML has started being the de facto approach to language processing).\n\nOverall, in my view, this paper may be of interest to the community in terms of becoming aware of a potentially useful tool. However, there is not enough in terms of research contribution for a full-length paper (a demo or poster may be more suitable).", "rating": "3: Clear rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "I\u2019d like to thank the authors for submitting this work to GI2020. The paper describes a comprehensive data annotation tool that has components and features for language, vision, and relations labeling that are customizable and stitchable to suit various labeling needs.\n\nTaking a perspective of technical HCI systems research, I focus on design rationales and evaluation of the newly proposed tool. While the technical and engineering effort involved is not getting unnoticed, I would like to raise a number of issues to contextualize this research to help the readers understand the contribution more clearly. \n\nI find it difficult to position this work. There\u2019s a specific area with much history and a large body of work in microtasking, crowdsourcing and crowd workflow that focus on how to annotate \"well\". I suggest the authors consult this area to strengthen the narrative around \u201clabeling task\u201d component of the work. 
\n(for example, Michael Berstein\u2019s group and Dan Weld\u2019s group have a long list of publications that the authors can reference - apologize for namedropping, but there are just too many relevant papers to list.)\n\nThe general direction suggests the tool as a \u201chuman-ai collaboration\u201d tool, Does it mean the authors want this tool to be seen as a machine-assisted labeling tool? That is a very specific type of human-ai collaboration, and is not the first thing that pops into my mind when I hear \"human-ai collaboration\". The use of terminology can be more concrete and specific throughout. Similarly, \"XAI\" also is a very catchy term, but can mean multiple things in academic discourse. \n\nThe first sentence in the abstract, \u201cArtificial Intelligence (AI) research, including machine learning, computer vision, and natural language processing, requires large amounts of annotated data.\u201d is false. There are specific approaches (possibly suitable for specific problems) that researchers take that require annotated data. There are other approaches in reinforcement learning, self-supervised learning, probabilistic programming that do not need labels to work. It\u2019s an incorrect claim to argue the whole field \u201crequires\u201d annotated data. I urge the author to correct this sentence. \n\nWhile the authors claim their tool facilitates \u201cefficient annotation\u201d, the evidence isn\u2019t to be found in the submission. What evaluations were done to prove its efficiency? And what other tools are considered baseline, and other candidates?\n\nI find the biggest potential impact of this system to be the synergy of \u201cone place for all\u201d label brings(i.e. Figure 8) However, the use cases (Application section) list a number of different applications the system can be / has been used for without really demonstrating what clear benefits there were over other possible options. \n\nThe current submission is a good presentation of what the system can do. To help the readers understand the contribution better, I would like the authors to highlight the \u201carguments\u201d to why certain design choices of the component were meaningful, why it makes the tool \u201cmore usable\u201d than others, or \u201cwhat benefits\u201d the tool brings to the users that were previously difficult to attain. \n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "The paper presents a modular annotation, visualization, and inference software combining machine learning approaches from computational language and vision research. The software leverages ML visualization approaches as a canvas for annotating events. This is a timely topic considering the need for high quality annotation and need for convenient systems that make use of multi-model approaches. While aspects of the presented system have been presented in previous software, the presented system is novel in the completeness and maturity of the development. I believe that this work has potential to support many who use annotated video data as a source for their research or product. \n\nThe paper is well written and structured and provides a sufficient level of detail to follow the approach taken. A section on the limitations and future work would have been appreciated. 
\n", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 3 |
KyMw0p9rWL | KyMw0p9rWL | KyMw0p9rWL | Gaggle: Visual Analytics for Model Space Navigation | [
"Subhajit Das",
"Dylan Cashman",
"Remco Chang",
"Alex Endert"
] | graphicsinterface.org 2020 Conference | 2020 | Recent visual analytics systems make use of multiple machine learning models to better fit the data as opposed to traditional single, pre-defined model systems. However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models. Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from. This poses complexity to navigate this model space to find the right model for the data and the task. In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space. Further translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks. Through a qualitative user study, we show how our approach helps users to find a best model for a classification and ranking task. The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection without requiring any technical expertise from users. | [
"visual analytics",
"interactive machine learning",
"classification",
"ranking"
] | https://openreview.net/pdf?id=KyMw0p9rWL | @inproceedings{
das2020gaggle,
title={Gaggle: Visual Analytics for Model Space Navigation},
author={Subhajit Das and Dylan Cashman and Remco Chang and Alex Endert},
booktitle={Graphics Interface 2020},
year={2020},
url={https://openreview.net/forum?id=KyMw0p9rWL}
} | 1576924737361 | [{"text": "The authors present a new visual analytic system called Gaggle, which aims to enable non-expert users to interactively navigate a model space by using a demonstration-based approach. An evaluation with 22 non-experts support the claim to simplify the complex model and hyperparameter search by using such an interaction paradigm. \n\nThe system is well motivated, its structure is sufficiently described and the overall paper is well written. \nHowever, some open questions and comments remain:\n\n1) The usage scenario is helpful to better understand the application of Gaggle.\nHowever, the difference between the scenario and the example data presented in Figure 1 makes it unnecessary complicated to understand the described scenario in context. I recommend the authors to align these two to improve readability. \n\n2) The last paragraph of the usage scenario is not clear. I recommend to reformulate and clarify this part of the paper. \n\n3) The authors claim that the presented \u201ctechnique guards against possible model overfitting incurred due to adjusting the models confirm to specified user preferences.\" (p.2) However, the authors declare later that the risk of overfitting is high with such aggressive model space search approaches like used in Gaggle. While the authors argue further that \u201c overfitting is less problematic\u201d in an exploratory context, it would strengthen the contribution to discuss potential solutions to this common issue. \n\n4) The author describe the aim of Gaggle as to help users to explore data and gain insights. \nHowever, the process described rather helps users to faster or more accurate build a model that produce intended outcome, similar to active learning approaches. \nThe contribution would benefit from a reflection and discussion on the active training vs. exploration trade-off. \nThis could take the form of a more detailed related work analysis regarding active learning and similar approaches and also as part of a larger discussion about the benefits of the presented approach over them. \n\n5) Regarding the presented model, two main question occur: 1) How does the model acts if the users selection is not coherent? \nIn the paper it is described that \u201cif a feature satisfies one interaction but fails on another, they are left out. Only the common features across interacted items get selected. The set of selected features Fs are then used to build the random forest model\u201d. While no or a very small set of common features might represent an edge case, it is still important to evaluate the robustness and generalizability of the model. \n2) The weight selection is described as \u201cThe weights are set based on the model accuracy on various datasets.\u201d It would help the reader if the author could elaborate on this aspect and would make the presented approach more replicable by the research community. \n\n6) I encourage the authors to an elaborated discussion of the potential generalizability to other models, contexts and real world scenarios. The current study takes a rather small dataset and exclusively random forest algorithms as an example case, which is quite limited in its application. \nTo open the contribution of this work to a larger audience, a discussion should include details about necessary changes, limitations of applicability and Gaggle's potential over known approaches.\n\n7) The qualitative evaluation should be described in more detail. 
This would include which likert scale questions were asked, their results and what did other participants report to present a more comprehensive picture of the overall results (currently only 6/22 referenced).\n\n\nI encourage the authors to consider the above mentioned comments to improve their submission, especially regarding the difference to other active learning approaches as well as the generalizability to other models and scenarios.\nIn conclusion, the authors present an interesting approach to help non-experts in ML to consider a diverse set of model parameters, without the burden of setting them manually. The system is well designed for the use case and the study reflects its applicability in this case. \n\nTherefore, I recommend to rather accept this paper, under the condition that the before mentioned comments are considered and addressed. \n\n\n\nSpelling mistakes:\n- p.2: domainstration-based\n- p.9: might require different different model", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "This paper presents Gaggle, a visual analytics system that helps novice analysts navigate model space in performing classification and ranking tasks. The system has many features and is probably useful and effective. But there is not much contribution in terms of the visual analytics research or understanding how humans use these types of systems. \n\nThere is no doubt in my mind that a lot of work and thoughts have gone into the development of this system. However, mixed initiative systems have been studied for quite a long time. There seems not sufficient novelty in terms of the technical contribution or visualization design in this paper.\n\nFirst, it is unclear about the effectiveness of the proposed Bayesian based model searching technique. Auto ML has been a hot topic in the machine learning community, e.g., https://sites.google.com/site/automlwsicml14/. This paper does not compare their approach with any other existing methods. It is not convincing that there exists sufficient novelty or contribution. It is also unclear if the proposed method works for navigating any ML model space (e.g., SVM, neutral networks) or just Random Forests (as described in the paper). If this is a limitation of the method, it needs to be discussed. \n\nMoreover, it is not clear who are the end users of Gaggle and whether Gaggle is useful in real world. The presentation of usage scenario is nice. However, it does not come from a real-world use case and I can hardly imagine how Gaggle would contribute to analytical process. It would be fine if the authors collect requirements from target users. The design goals seem to be distilled without involving end users in the loop. This would be okay if there was an insightful section on how human users would interact with such a system based on real user interviews. But the evaluation just uses standard techniques to confirm the usability of this system. \n", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "In this submission the authors describe Gaggle, a system that takes input from user to facilitate model space navigation in VA contexts. 
The paper is overall well written (a couple of typos here and there, including some I will report below) and the topic is relevant to GI and the visualization community.\n\nWhile I was not an expert at all in this domain, I found the paper relatively easy to follow and understand. Not being an expert, I cannot judge whether or not all appropriate previous approaches are cited, but I trust that other reviewers would be able to point out missing references if there are any.\n\nI would overall argue that the work should be accepted provided that the authors can address my (relatively small) concerns (and the ones from other reviewers). I will list my concerns and questions below.\n\nGiven the scalability issue that the authors currently highlight, it seems that such a complex system might be an overkill for small datasets. Especially in the way they write about this limitation. I would argue that the authors should somehow justify that their system can be useful in real scenarios despite that limitation and give concrete examples to avoid leaving the reader with this feeling. This is currently my main concern about the submission and the reason why I put a rating a bit lower than 7.\n\nIt would be nice to have access to the full set of questions that were asked during the semi-structured interviews, as well as the likert scale questions that were given to participants after each trial. Currently the likert-scale results do not make much sense without being able to see what questions were asked. I overall found the qualitative evaluation to be not correctly reported. \n\nLinked to this, I would argue that the datasets used by the authors do not seem very interesting or complicated. I, so far, fail to see why users would need to use Gaggle to make use of this data. I don\u2019t know if these datasets are representative of the datasets that the authors envision for Gaggle, and I surely hope that they are not, but in this case I would argue that the authors should properly justify why they chose these specific datasets. This is currently missing and it hinders the work that the authors have conducted and reported on previously. \n\n\nTypos:\nPage 3, second column \u201cThe found\u201d \u2192 \u201cThey found\u201d\nPage 8 \u201cIn future\u201d \u2192 \u201cIn future work\u201d\n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}] | 3 |
6w-Vom-SQZ0 | 6w-Vom-SQZ0 | 6w-Vom-SQZ0 | The Impact of Presentation Style on Human-In-The-Loop Detection of Algorithmic Bias | [
"Po-Ming Law",
"Sana Malik",
"Fan Du",
"Moumita Sinha"
] | graphicsinterface.org 2020 Conference | 2020 | While decision makers have begun to employ machine learning, machine learning models may make predictions that bias against certain demographic groups. Semi-automated bias detection tools often present reports of automatically-detected biases using a recommendation list or visual cues. However, there is a lack of guidance concerning which presentation style to use in what scenarios. We conducted a small lab study with 16 participants to investigate how presentation style might affect user behaviors in reviewing bias reports. Participants used both a prototype with a recommendation list and a prototype with visual cues for bias detection. We found that participants often wanted to investigate the performance measures that were not automatically detected as biases. Yet, when using the prototype with a recommendation list, they tended to give less consideration to such measures. Grounded in the findings, we propose information load and comprehensiveness as two axes for characterizing bias detection tasks and illustrate how the two axes could be adopted to reason about when to use a recommendation list or visual cues. | [
"algorithmic bias",
"machine learning fairness",
"lab study"
] | https://openreview.net/pdf?id=6w-Vom-SQZ0 | @inproceedings{
law2020the,
title={The Impact of Presentation Style on Human-In-The-Loop Detection of Algorithmic Bias},
author={Po-Ming Law and Sana Malik and Fan Du and Moumita Sinha},
booktitle={Graphics Interface 2020},
year={2020},
url={https://openreview.net/forum?id=6w-Vom-SQZ0}
} | 1586010679825 | [{"text": "This paper describes a lab study with 16 participants that investigate the effect of presentation style (recommendation list or visual cues) on user behaviors in reviewing algorithmic bias reports. Through this study, the authors provided guidance in the design of semi-automated bias detection tools. \n\nThis paper addresses a timely and important topic. I think the hybrid (qualitative and quantitative) method the authors chose is not the easiest choice but the authors executed it very well. The paper is thoughtfully written and very easy to follow. I also appreciate that the resulting design guidelines can potentially generalize to many critical AI application domains beyond hiring. The outlined design space (Figure 4) could serve as a valuable instrument for designers and researchers working in visualizations and information design for ML outputs.\n\nI do have two critiques, specifically concerning 1) the definition of \u201calgorithmic fairness/bias\u201d and 2) novelty of the findings around information overload/comprehensiveness tradeoff.\n\nI think the paper would benefit from a clearer definition of algorithmic biases. This paper focuses almost exclusively on the *outputs* of algorithmic systems (e.g. accuracy disparity, classification rate, etc.). Algorithmic bias and/or biases in the training data can result in biased system outputs. In this sense, I suspect what the authors meant by algorithmic biases is actually biases in system outputs (including intrinsic biases in training data). Prior work addresses these two kinds of biases quite differently [1].\n\nThe other opportunity for improvement is in articulating the novelty of the design guidelines more explicitly. One way to achieve so can be adding a section in the Related Work on related existing data visualization and information design research (e.g. [2] and many more), which could help frame and highlight the novelty of this paper\u2019s findings.\n\n[1] Consider Microsoft\u2019s data card (https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/working-with-cards) versus modeling card (https://arxiv.org/pdf/1810.03993.pdf)\n[2] Designing Theory-Driven User-Centric Explainable AI, CHI\u201919 https://dl.acm.org/doi/pdf/10.1145/3290605.3300831\n", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "The paper examines different visual representations of algorithmic biases and how they affect the behavior of detection. \nBiases of ML algorithms are of increasing interest in the HCI due to a growing number of decision support systems in everyday processes. This problem is well-motivated and sufficiently outlined.\n\nTwo prototypes were developed to investigate this bias. \nThese were well designed and seem sufficient to investigate the problem at hand.\nThe paper introduces guidelines for designing bias-detection interfaces based on the comprehensiveness and information load necessary. This analysis further reveals current research gaps in the context of bias investigation tools.\n\nRecommendations for minor improvements: \n- The introduction as well as the discussion feels in some parts repetitive.\n- Figure 1 and 2 are hard to read as a printed version. \n- The caption of Figure 1 does not match the figure.\n- Typo: 'For example, A model may' (page 4)\n\nFinal comments:\nThe paper is informative, addresses a very timely topic and is well conducted. 
\nThe final results open new opportunities for future research.\nHence, I would recommend to accept this paper.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "This work investigates interface presentation styles (visual clues and recommendation lists) for auditing group biases in machine learning (ML) models. Through an in-lab within-subject study with 16 ML engineers, the authors evaluate performance measures while using the two types of interfaces. The paper contributes to interface \"design dimensions\" for bias detection and auditing tasks, i.e., information load, and comprehensiveness. \n\nCreating usable bias detection tools is an important and urgent problem in ML research. I commend the authors for taking on this problem. \n\nOverall this paper is very well written with adequate details about motivation and study design. The related work cites many key papers and does a good job of synthesizing prior literature to situate this work. The findings from this study (design dimensions) offer interesting insights for future research on auditing tools.\n\nHowever, I do have a few concerns about the design of the interfaces used in the study. I find that overall, the interface design lacks justification about how it affords/supports different types of auditing tasks. For example, I see that end-users need to scroll quite a bit to compare measures across different sub-groups. This may have influenced the number of measures they select. It would also be nice to synthesize insights based on different sub-tasks for bias auditing (even just looking at the \"foraging\" and \"sensemaking\" tasks). \n\nFurther, the highlighting feature in the visual cues interface is not salient enough (both in the video and the screenshots in the paper are hard to see). And in the recommendation list interface, the \"see all\" option is not discoverable. As the authors reported, they had to remind participants to click on the group name to see all measures. This confounds the results to some extent. I would have preferred an *always visible* button instead of clicking on the group header. \n\nThe time for each prototype was set at 10 minutes (from a few hours in the pilot). A sentence about this choice might be helpful. Additionally, better measures on cognitive load (e.g., NASA TLX, CLS questionnaire) could strengthen the study findings. \n\nIn summary, while there are some flaws in the study, the results are useful and provide directions for future research. I advise the authors to discuss the above-mentioned limitations of the paper. I recommend accepting this paper for publication. ", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 3 |
l3923_BJIAN | l3923_BJIAN | l3923_BJIAN | Language-Goal Imagination to Foster Creative Exploration in Deep RL | [
"Tristan Karch",
"Nicolas Lair",
"Cédric Colas",
"Jean-Michel Dussoux",
"Clément Moulin-Frier",
"Peter Ford Dominey",
"Pierre-Yves Oudeyer"
] | ICML.cc 2020 Workshop | 2020 | Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Children are known to use language and its compositionality as a tool to imagine descriptions of outcomes they never experienced before and target them as goals during play. We introduce IMAGINE, an intrinsically motivated deep RL architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. | [
"Exporation",
"Natural Language",
"Reinforcement Learning",
"Deep Learning"
] | https://openreview.net/pdf?id=l3923_BJIAN | @inproceedings{
karch2020languagegoal,
title={Language-Goal Imagination to Foster Creative Exploration in Deep {RL}},
author={Tristan Karch and Nicolas Lair and C{\'e}dric Colas and Jean-Michel Dussoux and Cl{\'e}ment Moulin-Frier and Peter Ford Dominey and Pierre-Yves Oudeyer},
booktitle={Language in Reinforcement Learning Workshop at ICML 2020},
year={2020},
url={https://openreview.net/forum?id=l3923_BJIAN}
} | 1591975449532 | [] | 0 |
mgpPKjaV4k0 | mgpPKjaV4k0 | mgpPKjaV4k0 | Does imputation matter? Benchmark for real-life classification problems. | [
"Katarzyna Woźnica",
"Przemyslaw Biecek"
] | ICML.cc 2020 Workshop | 2020 | Incomplete data are common in practical applications. Most predictive machine learning models do not handle missing values, so they require some preprocessing. Although many algorithms are used for data imputation, we do not understand the impact of the different methods on the predictive models' performance. This paper is the first to systematically evaluate the empirical effectiveness of data imputation algorithms for predictive models. The main contributions are (1) the recommendation of a general method for empirical benchmarking based on real-life classification tasks and (2) the comparative analysis of different imputation methods for a collection of data sets and a collection of ML algorithms. | [
"imputation methods",
"benchmark"
] | https://openreview.net/pdf?id=mgpPKjaV4k0 | @inproceedings{
wo{\'z}nica2020does,
title={Does imputation matter? Benchmark for real-life classification problems.},
author={Katarzyna Wo{\'z}nica and Przemyslaw Biecek},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=mgpPKjaV4k0}
} | 1591644894365 | [{"text": "Several methods are discarded even though they are methods that have shown good performance in many situations.\nMoreover, we do not know why some methods are better than other on certain datasets. It is very surprising that average and random are the 2 most interesting methods.\nThe authors should give insights on which methods to use according to the circumstances or the kind of dataset.", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
_wVkmK7wBhW | _wVkmK7wBhW | _wVkmK7wBhW | Multi-label Learning with Missing Values using Combined Facial Action Unit Datasets | [
"Jaspar Pahl",
"Ines Rieger",
"Dominik Seuss"
] | ICML.cc 2020 Workshop | 2020 | Facial action units allow an objective, standardized description of facial micro movements which can be used to describe emotions in human faces. Annotating data for action units is an expensive and time-consuming task, which leads to a scarce data situation. By combining multiple datasets from different studies, the amount of training data for a machine learning algorithm can be increased in order to create robust models for automated, multi-label action unit detection. However, every study annotates different action units, leading to a tremendous amount of missing labels in a combined database. In this work, we examine this challenge and present our approach to create a combined database and an algorithm capable of learning under the presence of missing labels without inferring their values. Our approach shows competitive performance compared to recent competitions in action unit detection. | [
"Action Unit Detection",
"Missing Labels",
"Multi Label",
"Deep Learning",
"Affective Computing"
] | https://openreview.net/pdf?id=_wVkmK7wBhW | @inproceedings{
pahl2020multilabel,
title={Multi-label Learning with Missing Values using Combined Facial Action Unit Datasets},
author={Jaspar Pahl and Ines Rieger and Dominik Seuss},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=_wVkmK7wBhW}
} | 1591807937827 | [{"text": "The authors explore the missing data problem in the context of Facial action analysis.\n\nFacial action analysis is basically a multi-label classification problem where features are images of faces, and labels are facial micro movements. \n\nThe goal of the authors is to combine a lot of combined facial action unit datasets in order to train a single deep model on them. One big issue is that different data sets may look at different facial micro movements, which can be seen as a missing data problem.\n\nThe authors describe the problem quite clearly, and show that simply ignoring the missing values while training can lead to a very competitive model.\n\nI think that the problem is very interesting, and certainly deserves to be discussed at this workshop. My main issue is that I don't fully understand what's the loss function used for training. Indeed, it looks like the authors suggest to use a loss based on the F1 score. But the F1 score is not a differentiable loss function, similarly to the 0-1 loss but unlike, e.g. cross-entropy. This should definitely be clarified in the final version.\n\nDiscussing under which missingness assumptions the approach suggested makes sense (e.g. missing completely at random) would be interesting.\n\n", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}] | 1 |
ZSLXyrpQHwl | ZSLXyrpQHwl | ZSLXyrpQHwl | Predicting Feature Imputability in the Absence of Ground Truth | [
"Niamh McCombe",
"Xuemei Ding",
"Girijesh Prasad",
"David P Finn",
"Stephen Todd",
"Paula L McClean",
"Kongfatt Wong-Lin"
] | ICML.cc 2020 Workshop | 2020 | Data imputation is the most popular method of dealing with missing values, but in most real-life applications, large amounts of missing data can occur and it is difficult or impossible to evaluate whether data has been imputed accurately (lack of ground truth). This paper addresses these issues by proposing an effective and simple principal component based method for determining whether individual data features can be accurately imputed - feature imputability. In particular, we establish a strong linear relationship between principal component loadings and feature imputability, even in the presence of extreme missingness and lack of ground truth. This work will have important implications for practical data imputation strategies. | [
"Missing Data",
"Data Imputation",
"Feature Imputability",
"Machine Learning",
"PCA",
"NIPALS"
] | https://openreview.net/pdf?id=ZSLXyrpQHwl | @inproceedings{
mccombe2020predicting,
title={Predicting Feature Imputability in the Absence of Ground Truth},
author={Niamh McCombe and Xuemei Ding and Girijesh Prasad and David P Finn and Stephen Todd and Paula L McClean and Kongfatt Wong-Lin},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=ZSLXyrpQHwl}
} | 1591719977842 | [{"text": "Summary: The authors propose a method to tackle the problem of feature imputability where the goal is to understand which features are most accurately imputable based on all other features in the data. This problem is interesting and aligns with the goals of this workshop. They show that using a PCA based imputation strategy, they can regress per-variable imputation performance on the first principal component loadings and this shows a linear relationship.\n\nStrengths:\n- The idea is interesting and deserves more exploration\n- A number of commonly used imputation strategies are used \n\nWeaknesses:\n- The methods of the paper need to be made clearer (for example, there is no table with a ranking of which features are most or least imputable in the dataset, why was the proposed missingness simulation approach used, how is imputation performance measured without a gold standard etc.)\n- It is not immediately clear to me how one would decide what to do after obtaining such a ranking of feature imputability, what constitutes a \u201clow enough\u201d score to justify removing the variable? Perhaps it is worth it to think about a hypothesis test for this?\n- Only one dataset is used where PC1 contains a majority of the variance, this is not always the case for many data sources\n- What happened to age and gender in the regression?\n\nFurther Questions: \n- If a variable has low feature imputability, does that necessarily mean it should be removed? In the case that the missingness pattern is MCAR and the feature is valuable for predicting the outcome, removing it could degrade performance. Even if missingness is MNAR, it could still be worth preserving in a model to trade of predictive performance for poor inference.\n- It may be worth thinking about nonlinear PCA and also test on datasets where many PCs are required to recover a good low rank approximation. \n\n\nOverall, the authors present an interesting direction. The methodology of the paper could be made much clearer esp questions outlined above. More thought has to be put into actionable insights derived from the output of such a ranking approach. \n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "- There is better principal component methods than the NIPALS algorithm to impute with PCA (see the missMDA package).\n- There is probably a bias in your methodology because your criterion is based on PC methods and you find that the best imputation methods are based on PC methods. Maybe if you consider a criterion based on regression, the imputation methods based on regression would be better. You should at least discuss this point.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 2 |
R4w3PTkCD4 | R4w3PTkCD4 | R4w3PTkCD4 | Working with Deep Generative Models and Tabular Data Imputation | [
"Ramiro Camino",
"Christian Hammerschmidt",
"Radu State"
] | ICML.cc 2020 Workshop | 2020 | Datasets with missing values are very common in industry applications.
Missing data typically have a negative impact on machine learning models.
With the rise of generative models in deep learning, recent studies have proposed solutions to the problem of imputing missing values based on various deep generative models.
Previous experiments with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) showed promising results in this domain.
Initially, these results focused on imputation in image data, e.g. filling missing patches in images.
Recent proposals addressed missing values in tabular data. For these data, the case for deep generative models seems to be less clear.
In the process of providing a fair comparison of proposed methods, we uncover several issues when assessing the status quo: the use of under-specified and ambiguous dataset names, the large range of parameters and hyper-parameters to tune for each method, and the use of different metrics and evaluation methods. | [
"deep learning",
"imputation",
"tabular data"
] | https://openreview.net/pdf?id=R4w3PTkCD4 | @inproceedings{
camino2020working,
title={Working with Deep Generative Models and Tabular Data Imputation},
author={Ramiro Camino and Christian Hammerschmidt and Radu State},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=R4w3PTkCD4}
} | 1591832596470 | [{"text": "In this paper the authors discuss several issues that arise when comparing deep generative models for missing data imputation from recent research. The authors give examples from recent papers indicating issues regarding dataset identification, inconsistent usage of metrics, hyperparameter selection, etc.\n\nMost of the raised issues can be resolved by releasing source code for papers and making sure that all details necessary for reproducing experiments are given in a paper. In that sense, the findings of the paper do not only apply to papers on topics around missing data but to any machine learning paper. While the authors are certainly correct in their observations, the paper would have been much stronger if they had highlighted the impact of these issues in a few examples. Nevertheless, the authors raise points that we should always have in the back of our heads when writing ML papers.\n\nMinor remark:\nIs there a better title for section 3?", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
P0DL7M6T57o | P0DL7M6T57o | P0DL7M6T57o | Path Imputation Strategies for Signature Models | [
"Michael Moor",
"Max Horn",
"Christian Bock",
"Karsten Borgwardt",
"Bastian Rieck"
] | ICML.cc 2020 Workshop | 2020 | The signature transform is a 'universal nonlinearity' on the space of continuous vector-valued paths, and has received attention for use in machine learning. However real-world temporal data is typically discretised, and must first be transformed into a continuous path before signature techniques can be applied. We characterise this as an imputation problem, and empirically assess the impact of various imputation techniques when applying signatures to irregular time series data. In our experiments, we find that the choice of imputation drastically affects shallow signature models, whereas deeper architectures are more robust. We also observe that uncertainty-aware predictions are overall beneficial, even compared to the uncertainty-aware training of Gaussian process (GP) adapters. Hence, we propose an extension of GP adapters by integrating uncertainty to the prediction step. This leads to competitive performance in general, and improves robustness in signature models in particular. | [
"signature models",
"path imputation strategies",
"imputation strategies",
"signature transform",
"nonlinearity",
"space",
"continuous",
"paths",
"attention",
"use"
] | https://openreview.net/pdf?id=P0DL7M6T57o | @inproceedings{
moor2020path,
title={Path Imputation Strategies for Signature Models},
author={Michael Moor and Max Horn and Christian Bock and Karsten Borgwardt and Bastian Rieck},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=P0DL7M6T57o}
} | 1589917733361 | [{"text": "Summary:\nDiscrete time series interpolation to obtain continuous paths is treated as a path imputation problem. The work proposes Gaussian process adapter based imputation strategy that shows improved performance on shallow signature transform models. \n\nStrengths:\n+ This is an interesting contribution that could be applied to several time series problems where feature extraction using signature transforms could be used. \n+ The paper is clearly written \n+ Experiments are well done and highlight the contributions\n\nWeaknesses:\n- However, presentation and discussion of the results are unconvincing. Lacks clarity. \n- A plot comparing the performance of different models with the proposed GP-PoM imputation strategy could have been useful. This information is present in Figure 2 but a focused discussion can be useful.\n- Is the conclusion that GP-PoM useful only when using shallow models (like SIG)? I think this is mentioned but not tied to the presented results.\n- There is no discussion surrounding the figure with number of parameters making it hard to appreciate. Why does the DeepSig model with GP-PoM model fewer parameters than when using other imputations? ", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}] | 1 |
77BShu7ITSh | 77BShu7ITSh | 77BShu7ITSh | A Study on Intentional-Value-Substitution Training for Regression with Incomplete Information | [
"Takuya Fukushima",
"Tomoharu Nakashima",
"Taku Hasegawa",
"Vicenç Torra"
] | ICML.cc 2020 Workshop | 2020 | This paper focuses on a method to train a regression model from incomplete input values. It is assumed in this paper that there are no missing values in the training data set, while missing values exist during the prediction phase using the trained model. Under this assumption, Intentional-Value-Substitution (IVS) training is proposed to obtain a machine learning model that keeps the prediction error as small as possible. Through a mathematical analysis, it is shown that there are some meaningful substitution values in the IVS training for the model. It is shown through a series of computational experiments that the substitution values estimated by the extended mathematical analysis help the models predict outputs for inputs with missing values, even when there is more than one missing value. | [
"Machine Learning",
"Missing Value",
"Neural Network"
] | https://openreview.net/pdf?id=77BShu7ITSh | @inproceedings{
fukushima2020a,
title={A Study on Intentional-Value-Substitution Training for Regression with Incomplete Information},
author={Takuya Fukushima and Tomoharu Nakashima and Taku Hasegawa and Vicen{\c{c}} Torra},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=77BShu7ITSh}
} | 1591809892235 | [{"text": "**Summary**:\nThe authors consider a regression problem where values are missing in the test set but not in the training set. They consider a previously published method called Intentional-Value-Substitution (IVS). At train time, this method replaces known values from the training set by new values chosen so as to minimise the difference between the conditional expectation of the target function with respect to p(x_mis|x_obs) and the target function evaluated at these new values. IVS thus needs knowledge of the true target function. Previous work has proposed an algorithm to overcome this limitation when at most one variable has missing entries in the test set. In this work, the authors extend this algorithm to the case of two variables with missing entriers in the test set.\n\n**Pros**:\n* The problem of supervised learning with missing values is an interesting one, which has received little attention so far.\n\n**Cons**:\n* The method proposed applies to quite restrictive settings: no missing values in the train set, problems with at most 2 variables with missing entries in the test set, at least one variable fully observed, all variables independent.\n* More insights to explain the results of the experiments would be useful. See questions.\n\n**Questions**:\n* It would be nice to clearly state how the test points are handled. At test time, the missing entries are imputed with the values computed at train time according to the region (x_1i, x_1(i+1)] it falls into? \n* Figure 2 shows that substituting values from the train set by 0 performs as well as IVS. But it is not the case in Figure 1. Comments on that would be interesting, because it would highlight when the simple procedure of replacing values by 0 is sufficient and when it is not.\n* It seems that the probability of substitution (say between 0.25 and 0.9) in the train set has little impact on the performance (for Theory, Theory random and Estimation at least). This is surprising to me. How high should the probability of substitution be to degrade performances?\n* To obtain eq. 19, you use an equality, but how is this equality obtained? \\psi\u2019_2(x_1, x_3) is defined as an argmin, but I don\u2019t see why it would imply the equality used to obtain eq.19 \n\n**Remarks**:\n* The difference between Theory and Theory random is not clear to me.\n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}] | 1 |
0IXcmQIAJPt | 0IXcmQIAJPt | 0IXcmQIAJPt | Handling Missing Data in Decision Trees: A Probabilistic Approach | [
"Pasha Khosravi",
"antonio vergari",
"YooJung Choi",
"Yitao Liang",
"Guy Van den Broeck"
] | ICML.cc 2020 Workshop | 2020 | Decision trees are a popular family of models due to their attractive properties such as interpretability and the ability to handle heterogeneous data. Concurrently, missing data is a prevalent occurrence that hinders the performance of machine learning models.
As such, handling missing data in decision trees is a well-studied problem. In this paper, we tackle this problem by taking a probabilistic approach. At deployment time, we use tractable density estimators to compute the "expected prediction" of our models.
At learning time, we fine-tune the parameters of already-learned trees by minimizing their "expected prediction loss" w.r.t. our density estimators. We provide brief experiments showcasing the effectiveness of our methods compared to a few baselines. | [
"missing data",
"decision trees",
"probabilistic reasoning",
"probabilistic circuits"
] | https://openreview.net/pdf?id=0IXcmQIAJPt | @inproceedings{
khosravi2020handling,
title={Handling Missing Data in Decision Trees: A Probabilistic Approach},
author={Pasha Khosravi and antonio vergari and YooJung Choi and Yitao Liang and Guy Van den Broeck},
booktitle={ICML Workshop on the Art of Learning with Missing Values (Artemiss)},
year={2020},
url={https://openreview.net/forum?id=0IXcmQIAJPt}
} | 1591760613079 | [{"text": "The authors proposed to use probabilistic circuits in expected predictions for decision trees. Further, they introduced expected loss minimization that improved imputation while missing values are present at learning time.\n\nThe authors mention the weakness of XGBoost, which needs to be trained with missing data to give acceptable performance. However, they used this method in the first experiment, which has a complete training set.\n\nThe explanation regarding the median imputation is unclear. Did you consider the median of the training data for the test imputation, or is the median comes from observed test data?\n\nI recommend comparing this method with other baselines such as C4.5, MICE, and variational autoencoders and add more experiments.\n\nIt would be interesting to see computational time and the source code too.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 1 |
q1o2mWaOssG | q1o2mWaOssG | q1o2mWaOssG | Brain-inspired predictive coding dynamics improve the robustness of deep neural networks | [
"Bhavin Choksi",
"Milad Mozafari",
"Callum Biggs O'May",
"B. ADOR",
"Andrea Alamia",
"Rufin VanRullen"
] | NeurIPS.cc 2020 Workshop | 2020 | Deep neural networks excel at image classification, but their performance is far less robust to input perturbations than human perception. In this work we address this shortcoming by incorporating brain-inspired recurrent dynamics in deep convolutional networks. We augment a pretrained feedforward classification model (VGG16 trained on ImageNet) with a “predictive coding” strategy: a framework popular in neuroscience for characterizing cortical function. At each layer of the hierarchical model, generative feedback “predicts” (i.e., reconstructs) the pattern of activity in the previous layer. The reconstruction errors are used to iteratively update the network’s representations across timesteps, and to optimize the network's feedback weights over the natural image dataset--a form of unsupervised training. We demonstrate that this results in a network with improved robustness compared to the corresponding feedforward baseline, not only against various types of noise but also against a suite of adversarial attacks. We propose that most feedforward models could be equipped with these brain-inspired feedback dynamics, thus improving their robustness to input perturbations. | [
"predictive coding",
"neuroscience",
"robustness",
"machine learning",
"deep learning"
] | https://openreview.net/pdf?id=q1o2mWaOssG | @inproceedings{
choksi2020braininspired,
title={Brain-inspired predictive coding dynamics improve the robustness of deep neural networks},
author={Bhavin Choksi and Milad Mozafari and Callum Biggs O'May and B. ADOR and Andrea Alamia and Rufin VanRullen},
booktitle={NeurIPS 2020 Workshop SVRHM},
year={2020},
url={https://openreview.net/forum?id=q1o2mWaOssG}
} | 1602229912923 | [{"text": "This is a timely attempt to achieve biologically-plausible robust classification by introducing predictive-coding recurrent dynamics to a pretrained convolutional deep neural network. Adversarial examples pose a difficult challenge to convolutional deep neural networks as models of human recognition. The leading CS-based solution (adversarial training) has no biological plausibility, and it operates by artificially extending the training data instead of introducing a better inductive bias to the model. Therefore, the aim of the current work is very significant.\n\nHowever, I am not sure that this particular implementation (augmenting a fixed feedforward VGG with top-down predictive coding connections) is sufficient for achieving this tall order aim. Unlike the Rao & Ballard 1999 model or its supervised adaptation by Spratling (2017, Cognitive Computation), the training of the weights of the proposed model is not governed by a generative objective. The feedforward connections are not finetuned to support more predictive high-level representation, and the top-down connections only learn to predict this fixed, pretrained representation. Therefore, this network should be conceived and presented as a predictive coding-inspired model rather than a proper implementation of the Bayesian predictive coding approach. This divergence from Bayesian predictive coding may (or may not) explain the qualitatively modest improvements achieved in model robustness. Having said that, I agree with the authors that their work is a step forward from the Wen 2018 PCN model, and I think that it is a step in the right direction. I therefore recommend this paper for presentation in SVRHM.\n\n#### Additional suggested points for improvement down-the-road:\n 1) Introduction: in my opinion, predictive coding is not supported by `ample\u2019 neuroscience evidence. It is actually quite debated. A few balanced reviews on the empirical evidence for predictive coding that can be cited in this context are Heilbron & Chait, 2018 Neuroscience; Aitchison & Lengyel, 2017 Curr Opin Neurobiol; Walsh, McGovern, Clark & O\u2019Connell, 2020 Ann N Y Acad Sci.\n 2) As mentioned above, the model is *not* equivalent to a supervised extension of the Rao & Ballard 1999 model. It would be illuminating if the authors could motivate their proposed model by starting from the classical predictive coding model (i.e., Rao & Ballard) and then explain how (and why) they modified its underlying assumptions to arrive at equations (1) and (2).\n 3) Apply a formal hyperparameter search for $\\beta_n$ $\\lambda_n$ and $\\alpha_n$ instead of manual tuning; Explicitly report the criterion used for hyper-parameter choice, including the cross-validation scheme.\n 4) Robustness evaluation (all panels of figure 2) lacks context and comparison to alternative models. These should include adversarially trained CNNs as well as competing implementations of predictive-coding based classification.\n 5) Test and discuss the effect of minimizing the error of the first timepoint vs. a temporal average of the error.\n 6) Cite and compare to recent similar works (e.g., Huang et al., 2020, arXiv:2007.09200).\n\n#### Minor points:\n 7) Reporting of accuracy (Figure 2 panel b): a relative scale can be used in addition to an absolute scale, not instead of it.\n 8) Reporting of correlation distance (Figure 2, panel d): the y-axis label should be \u2018Normalized Correlation Distance\u2019 and not just \u2018Correlation Distance\u2019. 
The normalization should be defined and discussed in the text.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "The problem that the paper addresses is an important and an interesting one. The role of feedback in networks is still disputed and this paper provides some evidence towards a solution. The approach is fairly straightforward yet provides some interesting results. Though there are still many open questions remaining, such as how would this model handle temporal sequences, this nonetheless provides an interesting start.\n\nThe model is also shown to be robust to a number of types of adversarial attacks. However, one of the major concerns and limitations of the paper is that the model and its dynamics have not been tested under the traditional supervised classification setting. It is not clear these feedback connections help with traditional classification error minimization. Though it is impressive that robustness increases with timesteps through the network, it is unclear whether classification noise such as due to intra class nuisance variation can be handled better using such a predictive feedback mechanism?\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 2 |
jE6SlVTOFPV | jE6SlVTOFPV | jE6SlVTOFPV | Iterative VAE as a predictive brain model for out-of-distribution generalization | [
"Victor Boutin",
"Aimen Zerroug",
"Minju Jung",
"Thomas Serre"
] | NeurIPS.cc 2020 Workshop | 2020 | Our ability to generalize beyond training data to novel, out-of-distribution, image degradations is a hallmark of primate vision. The predictive brain, exemplified by predictive coding networks (PCNs), has become a prominent neuroscience theory of neural computation. Motivated by the recent successes of variational autoencoders (VAEs) in machine learning, we rigorously derive a correspondence between PCNs and VAEs. This motivates us to consider iterative extensions of VAEs (iVAEs) as plausible variational extensions of the PCNs. We further demonstrate that iVAEs generalize to distributional shifts significantly better than both PCNs and VAEs. In addition, we propose a novel measure of recognizability for individual samples which can be tested against human psychophysical data. Overall, we hope this work will spur interest in iVAEs as a promising new direction for modeling in neuroscience. | [
"VAE",
"Iterative VAE",
"Predictive Coding",
"Out-of-distribution generalization",
"Generative models",
"Visual perception"
] | https://openreview.net/pdf?id=jE6SlVTOFPV | @inproceedings{
boutin2020iterative,
title={Iterative {VAE} as a predictive brain model for out-of-distribution generalization},
author={Victor Boutin and Aimen Zerroug and Minju Jung and Thomas Serre},
booktitle={NeurIPS 2020 Workshop SVRHM},
year={2020},
url={https://openreview.net/forum?id=jE6SlVTOFPV}
} | 1602229912012 | [{"text": "The paper is very interesting in the sense that it takes the idea of semi amortization in context of variational autoencoders (VAEs) and studies how it might suggest computational accounts of human beings in recognizing out of distribution samples, specifically in a tradeoff of inference time and accuracy on the harder examples.\n\nI have an alternative explanation for why iVAEs work better than VAEs for the noisy example case, which is not necessarily around \u201cmaking out of distribution latent variables in distribution\u201d, which in my opinion does not sound technically correct. Instead here is an alternative hypothesis to explain the observations from the paper:\n\nWhen we add noise to images and feed them through the inference network of a VAE one expects the model to be better at supporting classification of images, since VAEs learn a latent space with compression, that is, low I(X; Z). However, the inference network of such models might not generalize to the out of distribution case, in which case semi-amortized or iterative VAEs provide a much better estimate of the posterior q(z| x) in the noisy case since they actually solve the optimization problem of interest (as opposed to just doing a feedforward pass through the inference network). In any case, with accurate inference the VAE should support classification of the noisy examples. This above explanation also accounts for why higher beta does better in the noisy case as higher beta means more compression, or lower I(X; Z) learnt by the model [A]. \n\nOne technical issue in the derivation of the correspondence of the PCNs and VAEs appears to be that the derivation currently only seems to hold in the discrete case (which is the case for which the delta function is written the way it is right now). In the continuous case, the delta function at z = z* would have a pdf of (tending to) \\inf making log of that also \\inf (in the second term of the derivation). Might be worth clarifying that this is only the case for discrete z (which is admittedly a less common or useful case). \n\n\n[A]: Alemi, Alexander A., Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. 2017. \u201cFixing a Broken ELBO.\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1711.00464.\n", "rating": "6: Marginally above acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "Summary and contributions: This paper derives the PCN objective from Rao & Ballard as a special case of the VAE objective assuming that (1) the approximating distribution q(z |) is a point estimate and (2) the variance of the likelihood and prior of the generative model are constant. The paper then observes that the EM algorithm used to train PCN and the iterative inference algorithms used to train iVAE follow the same algorithmic structure. The paper last presents experiments that show that iVAE outperforms PCN and VAE on out-of-distribution generalization.\n\nStrengths:\n1. presents a mathematical connection between the objective of PCN and VAE\n2. notices that iVAE is simply implementing an unrolled algorithm like EM\n\nWeaknesses:\n1. 
while it is worth formalizing the connection between PCN and iVAE, the connection seems rather straightforward and not enormously novel, especially considering progress on recent work in developing iterative inference methods with neural networks: Neural Expectation Maximization [1] implements the EM procedure described in algorithm 1 of this paper, and its successor IODINE [2] implements the iVAE procedure described in algorithm 2 of this paper. Dynamic versions of both have also been proposed: Relational Neural Expectation Maximization [3], and its iVAE counterpart OP3 [4].\n2. That iVAE would perform better in out-of-distribution generalization also seems like a straightforward claim, especially when we consider that iVAE is, in some sense, training on the testing set. However, I agree that it is valuable to have experiments that definitively show this.\n3. I think I would have appreciated more context for how the connection between PCNs and VAEs motivate the authors' hypothesis that iVAE may be better for out-of-distribution generalization. The intro briefly mentioned that cortical feedback has been studied to be useful for solving difficult recognition problems, but it is not clear what the link is between PCNs and out-of-distribution generalization.\n\nRecommendations:\n1. as I see the theoretical component of this paper as rather straightforward, I belive this paper could have the most potential as a large-scale empirical study over multiple datasets on how iterative inference improves out-of-distribution generalization.\n2. I think it is crucial to address Weakness 3 above; otherwise it is not clear how the theoretical portion of the paper connects to the empirical portion.\n\n[1] Greff, K., Van Steenkiste, S., & Schmidhuber, J. (2017). Neural expectation maximization. In Advances in Neural Information Processing Systems (pp. 6691-6701).\n[2] Greff, K., Kaufman, R. L., Kabra, R., Watters, N., Burgess, C., Zoran, D., ... & Lerchner, A. (2019). Multi-object representation learning with iterative variational inference. arXiv preprint arXiv:1903.00450.\n[3] Van Steenkiste, S., Chang, M., Greff, K., & Schmidhuber, J. (2018). Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353.\n[4] Veerapaneni, R., Co-Reyes, J. D., Chang, M., Janner, M., Finn, C., Wu, J., ... & Levine, S. (2020, May). Entity abstraction in visual model-based reinforcement learning. In Conference on Robot Learning (pp. 1439-1456). PMLR.", "rating": "6: Marginally above acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "Interesting insight, novel model worth discussion\n\nThe paper draws a theoretical parallel between iterative VAE (iVAE) and predictive coding networks (PCN; a la Rao & Ballard 1999), thus linking iVAEs to a theory of neuronal coding. Further, the paper shows empirically that iVAE can recover noise-degraded MNIST digits for classification, improving upon both non-iterative VAE and PCN (in addition to the classifier itself without denoising). Thus, iVAE presents a theoretically and biologically-motivated model of recurrent processing and generalization in vision. \n\nThe paper is recommended for acceptance, as it offers an interesting perspective and viable model worth discussing in the workshop. 
It is not without limitations and questions (below), which do not take away from the interest of the paper for broader discussion.\n\n\nComments:\n- The broad question concerns \"generalization.\" Why is it defined in this specific way, as noise-removal and not, e.g., view-invariance? Also, why is the evaluation metric defined this way? Specifically:\n- Why use a classifier to judge reconstruction? During training, the VAE/PCN models are unaware of classification (right?). Thus, to evaluate the models, why not use a more basic metric such as pixel-wise L2 loss, or another reconstruction loss used during training? Although the practical motivation may be to preserve classification accuracy, it is conterintuitive to evaluate models on something they're not trained on. To put the question another way, the finding seems surprising that these models preserve something so high-level; but this result could be entirely due to preserving much simpler features such as pixel-wise values. This simple explanation is not evaluated or accounted for. \n- The empirical results seem to be not informative, but rather entirely expected from the theoretical results. Specifically, iVAE improves over iterations and outperforms VAE because 1) iVAE starts at similar performance to VAE; 2) iVAE has to improve during iteration because the objective guiding iterations is to improve the reconstruction. Along the same lines, is it surprising that iVAE outperforms PCN? The former has the advantage of an amortized initialization. Even if iVAE and PCN gain the same from iteration (which seems to be the case based on Fig 2a), iVAE will win by having a \"headstart.\" To be sure, results being expectable does not take away from the theoretical interest, which is what enables prediction of the results in the first place. Also, seeing predicted results can be reassuring. But the significance of the empirical results should be put into perspective in text. \n- More informative experiments can also be done. For example, there seems to be an opportunity to separately test the benefit of initialization and of using a full posterior, when comparing iVAE to PCN. Based on your nice theoretical results, iVAE can be reduced to PCN if only the MLE of the posterior is used. Thus, you could test an iVAE that is initialized normally but uses only the MLE during SVI.\n- Part of the motivation of the paper is that \"hierarchical iVAE would provide a better model of primate vision.\" If so, existing relevant results should be discussed, e.g., https://arxiv.org/pdf/1606.05579.pdf, https://www.biorxiv.org/content/10.1101/2020.06.16.155556v1.full.pdf, https://arxiv.org/pdf/2006.14304.pdf.\n\n\nMinor comments:\n- Typo in line 62: q_phi(x|z) should be q_phi(z|x).\n- \"Under the assumption that greater prior strength in the PCN and iVAE reflects the need for greater feedback [29]\": the rationale for this assumption is not transparent, nor is it clear where it is mentioned in reference 29. Please specify.\n- \"how specific measures of recognizability could be derived from the iVAE model resulting in testable predictions for psychophysics which we plan to test in future work\": Can you specify what testable prediction you plan to test? This statement is quite vague. In your own interpretations of the results, it is suggested that decision boundaries, prototypes, and ELBO are likely correlated. 
How can you actually distinguish them?\n- \"By making explicit connections between cortical feedback and deep generative models\": The connection does not seem explicit, as it is unclear whether feedback relates to SVI (short time scale), to calculating the reconstruction error (both during SVI and during training), to both, or something else in your model. You also suggest that feedback may be related to the prior, but that is not explicit either, as commented above.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}] | 3 |
hzPuEkTz4q0 | hzPuEkTz4q0 | hzPuEkTz4q0 | How does task structure shape representations in deep neural networks? | [
"Kushin Mukherjee",
"Timothy T. Rogers"
] | NeurIPS.cc 2020 Workshop | 2020 | While modern deep convolutional neural networks can be trained to perform at human levels of object recognition and learn visual features in the process, humans use vision for a host of tasks beyond object recognition, including drawing, acting, and making propositional statements. To investigate the role of task structure on the learned representations in deep networks, we trained separate models to perform two tasks that are simple for humans: imagery and sketching. Both models encoded a bitmap image with the same encoder architecture but used either a deconvolutional decoder for the imagery task or an LSTM sequence decoder for the sketching task. We find that while both models learn to perform their respective tasks well, the sketcher model learns representations that can be better decoded to provide visual information about an input, including shape, location, and semantic category, highlighting the importance of output task modality in learning robust visual representations. | [
"Machine Learning",
"Perception",
"Cognitive Science",
"Learning",
"Cognitive Neuroscience",
"Computer Vision"
] | https://openreview.net/pdf?id=hzPuEkTz4q0 | @inproceedings{
mukherjee2020how,
title={How does task structure shape representations in deep neural networks?},
author={Kushin Mukherjee and Timothy T. Rogers},
booktitle={NeurIPS 2020 Workshop SVRHM},
year={2020},
url={https://openreview.net/forum?id=hzPuEkTz4q0}
} | 1602229914632 | [{"text": "The paper analyzes how difference in the way the task is specified can affect the representation learned by the latent variables of an auto-encoder. In particular, an auto-encoder is trained to reconstruct simple stylized images using either a convolutional decoder that outputs raw pixel values, or an LSTM decoder that outputs a sequence of pen strokes. The results show that both models can create a sufficient representation of the input image (the reconstructions are qualitatively good), however while the stroke-based encodes linearly position information.\n\nThe question proposed by the paper \u2014 how the ability of an agent to act and communicate in an environment shape its learned representation \u2014 is indeed interesting and impactful. The paper tackles the setting where an agent is tasked with reproducing an image using two different modalities, for which it introduces a simple toy dataset to test the difference.\n\nIt would be interesting to extend these results to more complex tasks, and also to discuss the relation similar setup described in the literature. For example, http://proceedings.mlr.press/v37/gregor15.pdf describes auto-encoding of an image in multiple steps using an attention mechanism that resembles drawing, in which case they observe a more semantic representation emerging. Also https://arxiv.org/abs/1804.01118, https://arxiv.org/pdf/1910.01007.pdf describe image generation using a stroke based generative model, although they do not focus on the qualities of the learned representation. https://science.sciencemag.org/content/360/6394/1204 notes that when a network is tasked to imagine the scene pictured from a different point of view it learns a representation of the input image that is more semantic.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "The authors investigate the role of two types of task structure (imagery and sketching) on the learned representations in deep neural networks. The paper is well-written, the proposed modeling of the two tasks (convolutional autoencoder, encoder-LSTM decoder) is interesting and novel. I recommend accepting the paper.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "Summary\n-------\n\nThis paper explores how different tasks affect the latent representation learned by a network. The authors train two networks: a fairly standard convolutional autoencoder, and, an encoder-decoder model where the encoder is the same as the AE, but the decoder is a two layer lstm which produces coordinates to move a pen to from the current position, as well as the pen state (up/down). A dataset of drawings is generated together with the corresponding pen-stroke paths and used to train the models. The authors then explore what information is captured in the representational vectors produced by the encoder parts of the two models. They find that although both representations contain information about the underlying variables that generated the input, the sketcher model's representation has a significantly stronger (and perhaps more disentangled) representation than the AE. 
\n\nPositives\n---------\n\n- Obviously very preliminary work, but a good paper overall with interesting initial findings.\n- Well written and easy to follow\n\n\nQuestions/Concerns/Comments\n--------\n\n- Can the dataset (and code to generate it) be made public?\n- Have the authors looked at similar work investigating how differing tasks affect what a model learns (for example I saw this at one of the NeurIPS workshops last year: https://arxiv.org/pdf/1911.05546.pdf)\n- It's probably also worth looking recent work on disentanglement within models - it strikes me that the sketcher learns a more disentangled representation of the underlying features that control the image generation than the AE does.\n- Finally, I'd encourage the authors to revisit the encoder network structure (in future work!) - the max-poolings probably make it quite hard to learn about specific spatial information with high resolution (which could explain the sensitivity); perhaps augmenting the network with spatial information (https://arxiv.org/abs/1807.03247) could help? \n\n\nRationale for score\n-------------------\n\nOverall, I think this would make an excellent poster at the workshop and look forward to talking to the authors about the work. ", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 3 |
dPwyQnHUVvw | dPwyQnHUVvw | dPwyQnHUVvw | CNNs efficiently learn long-range dependencies | [
"Timo Lüddecke",
"Alexander S Ecker"
] | NeurIPS.cc 2020 Workshop | 2020 | The role of feedback (or recurrent) connections is a fundamental question in neuroscience and machine learning. Recently, two benchmarks [1,2], which require following paths in images, have been proposed as examples where recurrence was considered helpful for efficiently solving them.
In this work, we demonstrate that these tasks can be solved equally well or even better using a single efficient convolutional feed-forward neural network architecture.
We analyze ResNet training regarding model complexity and sample efficiency and show that a narrow, parameter-efficient ResNet performs on par with the recurrent and computationally more complex hCNN and td+hCNN models from previous work on both benchmarks.
Code: https://eckerlab.org/code/cnn-efficient-path-tracing | [
"feed-forward",
"CNN",
"recurrence",
"feedback"
] | https://openreview.net/pdf?id=dPwyQnHUVvw | @inproceedings{
l{\"u}ddecke2020cnns,
title={{CNN}s efficiently learn long-range dependencies},
author={Timo L{\"u}ddecke and Alexander S Ecker},
booktitle={NeurIPS 2020 Workshop SVRHM},
year={2020},
url={https://openreview.net/forum?id=dPwyQnHUVvw}
} | 1602229911564 | [{"text": "This paper seeks to understand the role of recurrence/feedback in neural system. While literature suggests that adding recurrence provides a significant boost to network performance for certain tasks, this paper aims to demonstrate that equivalent or better quality can also be achieved by a feed-forward network. This is a useful result that indicates that more analysis is necessary to fully characterize the necessity and utility of recurrence in neural networks.\n\nThe authors utilize the Pathfinder and cABC datasets for their experiment, which have been previously used to demonstrate the superiority of recurrence. They present results on variants of a ResNet18 model as the feed-forward network, finding that it is capable of beating the accuracy of prior state of art.\n\nThis work certainly causes the reader to question whether recurrence (with its training and representational complexity) is better than a carefully designed feed-forward architecture. The result is technically sound and supports the conclusion for the chosen architecture and datasets. Hence, I vote for acceptance.\n\nHowever, a deeper dive into why residual networks perform unexpectedly good at these tasks would have helped the reader further. E.g.\n\n* If we removed the residual connections, how much bigger would the network have to be for identical accuracy?\n* Can recurrence help realize an even smaller ResNet?\n\nEssentially, I am curious whether the improvements due to choosing a ResNet and due to using recurrence are orthogonal and additive. That would help shed some more light on the nature of these results. Regardless, it would be great to see this work published.", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"text": "Summary:\nThis is a re-examination of recent works that have described a limited ability of feedforward models (ResNets as an example) to solve tasks which are designed to evoke perceptual grouping routines. They perform experiments over a range of training-set sizes not reported in the papers that introduced these tasks, and find the number of examples needed for successfully training ResNets. This number is much greater than what the feedback networks reported in those papers require. These results are paired with replications from the original papers, along with novel results on the stability of training for ResNets.\n\nStrengths:\n\nThe authors replicated the results in Kim et al., 2020, which is great to see! Especially the dissociation between the importance of horizontal and top-down connections on Pathfinder vs. cABC.\n\nWeaknesses:\n\nThe authors' argue a point about the ability of ResNets to learn Pathfinder and cABC. This is not disputed in Kim et al., 2020. In fact, in Figure S8 they report results from different ResNet parameterizations on Pathfinder and cABC, much like the authors do here. Kim and colleagues show that *you can* change the architecture to do better, but that change doesn't hold for both tasks.\n\nBut the most critical point is that Kim and colleagues argue that feedback connections are important for *sample efficiency*. This is reinforced by the findings in the current paper. 
The authors do not describe a ResNet that can trace paths as efficiently as feedback networks.\n\nIn order to demonstrate that a ResNet can trace paths as efficiently or better than feedback networks, the authors must present a ResNet which can achieve comparable or better performance on the same or fewer training examples. The authors fail to demonstrate this.\n\n", "rating": "3: Clear rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"text": "This paper demonstrates that benchmarks constructed to motivate the use of recurrent vision architectures can actually be solved by standard ResNets, with similar parameter- and sample-efficiency.\n\nStrengths:\n\nThe submission provides compelling evidence both that Pathfinder and cABC tasks are too easy to be worth studying and that their use to motivate exotic architectures is misguided. This seems like a valuable contribution.\n\nThe work not only demonstrates that off-the-shelf ResNets can work, but also shows that they can be tuned to achieve similar parameter- and sample-efficiency.\n\nWeaknesses:\n\nThe paper that introduced the Pathfinder dataset (ref. 1) evaluates ResNet-18 and claims that it achieves high accuracy at path lengths of 6 and 9 but fails at path length 14. I\u2019m inclined to believe the results presented here, because I think it\u2019s easier to make models fail than to make them work. Nonetheless, it would strengthen the work to know exactly how the training setup differs from those of refs. 1 and 2 of the submission that makes the ResNets here succeed whereas that previous work failed.\n\nIt\u2019s unclear to me whether results regarding training time and stability in Figure 3, Table 2, and Figure 4 are meaningful. These findings are potentially sensitive to hyperparameters, but the authors do not appear tune the hyperparameters for different model or batch sizes. The authors seem to be surprised on L141 that tuning hyperparameters improves accuracy, but the value of hyperparameter tuning in deep learning, and particularly of tuning the learning rate, is well-established (see e.g. section 11.4 of the Goodfellow et al. deep learning book [1]). In addition, I wonder how much of the instability reported by the authors is related to the use of Adam with default values of the hyperparameters. Although refs 1 and 2 use Adam, most work involving ResNets uses SGD + momentum as in He et al. [2].\n\nIn Section 2.4.2, I'm not sure that the use of augmentation or pretraining is fair, since those methods effectively increase the training set size and could also improve sample efficiency of the baseline recurrent model. Nonetheless, the authors seem to achieve similar performance at the same number of training samples on Pathfinder simply by tuning the learning rate. On cABC, there is still a bit of a gap.\n\n[1] Goodfellow, I., Bengio, Y., Courville, A., & Bengio, Y. (2016).\u00a0Deep learning\u00a0(Vol. 1, p. 2). Cambridge: MIT press.\n[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In\u00a0Proceedings of the IEEE conference on computer vision and pattern recognition\u00a0(pp. 770-778).", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}] | 3 |
MW8-beeRCrB | MW8-beeRCrB | MW8-beeRCrB | Quantifying Adversarial Sensitivity of a Model as a Function of the Image Distribution | [
"Anonymous"
] | NeurIPS.cc 2020 Workshop | 2020 | In this paper, we propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular $\epsilon$-interval $[\epsilon_0, \epsilon_1]$ (interval of adversarial perturbation strengths) that facilitates comparisons across models when they have different initial $\epsilon_0$ performance. This can be used to determine how adversarially sensitive a model is to different image distributions; and/or to measure how robust a model is comparatively to other models for the same distribution. We used this adversarial robustness metric on MNIST, CIFAR-10, and a Fusion dataset (CIFAR-10 + MNIST) where trained models performed either a digit or object recognition task using a LeNet, ResNet50, or a fully connected network (FullyConnectedNet) architecture and found the following: 1) CIFAR-10 models are more adversarially sensitive than MNIST models; 2) Pretraining with another image distribution \textit{sometimes} carries over the adversarial sensitivity induced from the image distribution -- contingent on the pretrained image manifold; 3) Increasing the complexity of the image manifold increases the adversarial sensitivity of a model trained on that image manifold, but also shows that the task plays a role on the sensitivity. Collectively, our results imply non-trivial differences of the learned representation space of one perceptual system over another given its exposure to different image statistics (mainly objects vs digits). Moreover, these results hold even when model systems are equalized to have the same level of performance, or when exposed to matched image statistics of fusion images but with different tasks. | [
"Adversarial Robustness",
"Image Statistics",
"Explainable Machine Learning",
"Empirical Analysis"
] | https://openreview.net/pdf?id=MW8-beeRCrB | @misc{
anonymous2020quantifying,
title={Quantifying Adversarial Sensitivity of a Model as a Function of the Image Distribution},
author={Anonymous},
year={2020},
url={https://openreview.net/forum?id=MW8-beeRCrB}
} | 1602229914789 | [] | 0 |
bYTPqOKLVmO | bYTPqOKLVmO | bYTPqOKLVmO | Generalization of information - integrative encoding or category-based inference? | [
"Jessica Taylor",
"Helen Barron",
"Dasa Zeithamova",
"Masamichi Sakagami",
"Aurelio Cortese",
"Xiaochuan Pan"
] | ccneuro.org 2020 Workshop | 2020 | Title: Generalization of information - Integrative encoding or category-based inference?
Scientific question: How do biological organisms generalize previously-learned information for adaptive behaviour in novel experiences? This broad question spans interdisciplinary fields - from decision-making and perception, psychology, memory neuroscience, to theoretical considerations in machine learning and artificial intelligence (Cortese et al. 2019). One prominent theory (integrative encoding) suggests that when new memories are being made, we activate information from overlapping memory representations so that information from both are integrated and reencoded together. Behaviour in novel situations can be guided by simple recall of information related to these extended associative links. Although this mechanism is very simple, computationally this process could become tedious given the inestimable extent of potential (and sometimes unnecessary) associative links that could be formed. A different theory (category-based inference) suggests a computationally simpler mechanism: that humans use abstract thoughts, such as functional categorization, to make their behaviour more efficient. Categories provide a logical structure through which information learned for one stimulus may be generalized to other stimuli (members of the same category). However, this theory requires much higher order, complicated, abstractions than does the former. One possibility is that both of these theories might be valid, with an organism implementing different strategies dependent on their current circumstances. However, to date there exists no study that has clearly dissected the contribution of both theories, nor made formal predictions on how they could or should differ in neural or behavioural terms. | [
"information",
"integrative encoding",
"inference",
"generalization",
"behaviour",
"associative links",
"theories",
"title",
"scientific question",
"biological organisms"
] | https://openreview.net/pdf?id=bYTPqOKLVmO | null | 1596481565129 | [{"text": "This proposal investigates the cause of generalization for biological neural systems. This is of significant interest to both the neuroscience and artificial intelligence communities. In particular, the utility of this type of learned generalization is only beginning to be discussed in the context of artificial agents. It is clearly written and if the experiments lead to clear distinctions between the integrative encoding theory and the category-based inference ideas, it would be of significant impact to theoretical models both in neuroscience and machine learning. Since much of the proposal revolves around generalization, a larger focus on how to objectively define generalization would make this proposal even stronger. Since it will be used as the main performance metric where conclusions will be drawn from, more expansion on how this will be concretely tested would be helpful.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}] | 1 |
qmTf4gN1gu | qmTf4gN1gu | qmTf4gN1gu | Synthesizing lesions using contextual GANs improves breast cancer classification on mammograms | [
"Eric Wu",
"Kevin Wu",
"William Lotter"
] | MIDL.io 2020 Conference | 2020 | Data scarcity and class imbalance are two fundamental challenges in many machine learning applications to healthcare. Breast cancer classification in mammography exemplifies these challenges, with a malignancy rate of around 0.5% in a screening population, which is compounded by the relatively small size of lesions (~1% of the image) in malignant cases. Simultaneously, the prevalence of screening mammography creates a potential abundance of non-cancer exams to use for training. Altogether, these characteristics lead to overfitting on cancer cases, while under-utilizing non-cancer data. Here, we present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms. With self-attention and semi-supervised learning components, the U-net-based architecture can generate high resolution (256x256px) outputs, as necessary for mammography. When augmenting the original training set with the GAN-generated samples, we find a significant improvement in malignancy classification performance on a test set of real mammogram patches. Overall, the empirical results of our algorithm and the relevance to other medical imaging paradigms point to potentially fruitful further applications. | [
"mammography",
"gan",
"data augmentation",
"cancer"
] | https://openreview.net/pdf?id=qmTf4gN1gu | @misc{
wu2020synthesizing,
title={Synthesizing lesions using contextual {GAN}s improves breast cancer classification on mammograms},
author={Eric Wu and Kevin Wu and William Lotter},
year={2020},
url={https://openreview.net/forum?id=qmTf4gN1gu}
} | 1579955631643 | [] | 0 |
mOj4_RDHBD | mOj4_RDHBD | mOj4_RDHBD | Single-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance Images | [
"Anurag Garikipati",
"Rajesh Venkataraman"
] | MIDL.io 2020 Conference | 2020 | Fusion of magnetic resonance images (MRI) with ultrasound has led to major improvements in precision diagnostics for prostate cancer. A key step in the fusion process is segmentation of the prostate in MRI and machine learning (ML) has proven to be a valuable tool for segmentation. In this paper, we compare two ML workflows for prostate segmentation; a single-stage and multi-stage ML algorithm to address the challenges of prostate segmentation. | [
"Machine Learning",
"Prostate Segmentation",
"Magnetic Resonance Imaging"
] | https://openreview.net/pdf?id=mOj4_RDHBD | @misc{
garikipati2020singlestage,
title={Single-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance Images},
author={Anurag Garikipati and Rajesh Venkataraman},
year={2020},
url={https://openreview.net/forum?id=mOj4_RDHBD}
} | 1579955743873 | [{"text": "VEry small paper. Method not novel. Miss a lot of details to evaluate results.\n\nSingle-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance ImagesSingle-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance ImagesSingle-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance ImagesSingle-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance ImagesSingle-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance ImagesSingle-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance Images", "rating": "2: Weak reject", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "This paper presents a pipeline to perform automatic prostate segmentation in MRI. The main assumption is that segmentation performance would benefit from a separate processing of prostate MR images that contain seminal vesicles from those that do not. The proposed architecture consists of a first classification network that is trained to separate images with or without seminal vesicles. Each class of images is then processed through a separate UNet network. This architecture is compared to a standard UNet architecture trained on both types of images. \nThis paper suffers from several flaws that should be addressed.\nRegarding the methodological part:\n-The architecture of the classification network should be provided. \n-Description of the MRI dataset is critically missing, including the MRI sequence parameters, scanner, acquisition parameters. A reference to the \u2018Artemis\u2019 database should be provided.\n-Regarding the evaluation, from what I understand, the authors adopted a resubstitution method (ie train and test on the same dataset) : \u201dall images were used for the training and testing of the network\u201d. The text should be clarified if I misunderstood. Else, evaluation should be performed in a cross-validation or hold-out scenario to avoid producing optimistically biased results. \n -Regarding the quantitative results, it is not clear if the reported accuracies and DSC for the multi-stage model were estimated from images passed through the segmentation UNet after the classification step or not. If yes, then this means that these values reflect (ie account for) the imperfect accuracy of the classification model (0.8828) which can thus erroneously direct images with vesicles in the no-vesicle UNet and vice versa. If not, then the reported performance only evaluate segmentation performance of each type of images (with or without vesicles). In this latter situation, the authors should perform the whole evaluation accounting for classification step and following a cross-validation strategy as suggested above.\n-Quantitative performance by the standard UNet model trained on both types of images are similar to that reported by the two-stage model, thus suggesting that the two-stage model may not be competitive, since it requires to train three deep models instead of one. \nPlease comment.\n", "rating": "1: Strong reject", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"text": "Multi-stage training (with detection network as the first stage) is not new, for example, those with mask RCNN.\n\nCompared with a single-stage approach: with significantly higher costs including additional computation resources (two stages) and manual efforts (labelling each slice w/o seminal vesicles), the overall performance gain seems to be marginal (0.9105, 0.9035 vs 0.9063 in Dice score).", "rating": "1: Strong reject", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}] | 3