Dataset columns: title (string, length 8 to 238 characters); introduction (string, length 10 to 3.28k characters); conclusion (string, length 15 to 2.85k characters); prompt (string, length 510 to 5.5k characters).
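For reference, every row's prompt column follows the same chat-style template visible in the records below, wrapping that row's title and introduction into a single user turn. The following is a minimal, illustrative Python sketch of how such a prompt could be rebuilt from a row; the example row values and the build_prompt helper are hypothetical and only mirror the template shown in the data, not an official loader for this dataset.

import json

# Hypothetical example row; real rows also carry a "conclusion" field and the
# pre-built "prompt" string described above.
row = {
    "title": "A Survey on Cross-Lingual Summarization",
    "introduction": "To help people efficiently grasp the gist of documents in a foreign language, ...",
}

def build_prompt(title, introduction):
    # Mirrors the user-turn template observed in the "prompt" column:
    # "Help me to generate the conclusion of a survey paper given the
    #  title:*<title>* and the introduction:<introduction>"
    content = (
        "Help me to generate the conclusion of a survey paper given the title:*"
        + title
        + "* and the introduction:"
        + introduction
    )
    return [{"role": "user", "content": content}]

messages = build_prompt(row["title"], row["introduction"])
print(json.dumps(messages, ensure_ascii=False, indent=2))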
“Merging Results Is No Easy Task”: An International Survey Study of Collaborative Data Analysis Practices Among UX Practitioners
Usability testing is a frequently employed user-centred design method for detecting usability problems [25, 110]. Despite being effective and useful, analyzing usability test sessions can be tedious, challenging, and time-consuming [12, 25, 29, 81]. When analyzing usability test sessions, user experience (UX) practitioners often need to attend to multiple behavioral signals in both the visual and audio channels of the recordings and conduct several tasks (e.g., look at user’s actions, listen to user’s verbalizations, and write their own annotations) simultaneously [12]. Furthermore, UX practitioners working in industry often face time pressure to deliver their analysis results [25, 29, 64]. Consequently, it is not uncommon that UX practitioners might miss important usability problems or misinterpret them [26, 38].
We have conducted an international survey to understand the practices and challenges of collaboration in the context of conducting data analysis on usability test sessions. Based on the responses of 279 participants who had varying UX experience and worked in different geographic locations, we found that UX practitioners collaborate primarily to identify more usability problems and generate redesign suggestions, and to improve reliability of results to a lesser extent. We identified three modes of collaboration: independently analyze different portions of the data with little or no overlap and then collaborate (i.e., divide and conquer), collaboratively analyze the session with little or no independent analysis (i.e., group analysis), and independently analyze the same set of data and then collaborate. Although the third mode was recommended by the literature, it was least adopted among our respondents because it was perceived to be more time-consuming than the other two. Moreover, most of the respondents encountered challenges related to lack of time, as 70% needed to complete analysis within one week, which impacted their choice of collaboration modes. These findings highlight an opportunity to address the trade-offs between efficiency and validity of analysis results. What is more, respondents also experienced disagreements with colleagues regarding usability problems and difficulty in merging analysis from multiple practitioners. Our survey findings could potentially inform UX practitioners about how their colleagues perceive collaboration during data analysis. In addition, our findings reveal opportunities for developing better methods and tools to facilitate collaboration during analysis, for example, developing an integrated platform to support both analysis and collaboration in one place, establishing group analysis procedures, supporting both synchronous and asynchronous collaboration, and integrating team members with various skill levels.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*“Merging Results Is No Easy Task” : An International Survey Study of Collaborative Data Analysis Practices Among UX Practitioners* and the introduction:Usability testing is a frequently employed user-centred design method for detecting usability problems [ 25 , 110 ]. Despite being effective and useful, analyzing usability test sessions can be tedious, challenging, and time-consuming [ 12 , 25 , 29 , 81 ]. When analyzing usability test sessions, user experience (UX) practitioners often need to attend to multiple behavioral signals in both the visual and audio channels of the recordings and conduct several tasks (e.g., look at user’s actions, listen to user’s verbalizations, and write their own annotations) simultaneously [ 12 ]. Furthermore, UX prac- titioners working in industry often face time pressure to deliver their analysis results [ 25 , 29 , 64 ]. Consequently, it is not uncommon that UX practitioners might miss important usability problems or misinterpret them [26, 38]."},{"role":"assistant","content":"Conclusion: We have conducted an international survey to understand the prac- tices and challenges of collaboration in the context of conducting data analysis on usability test sessions. Based on the responses of 279 participants who had varying UX experience and worked in different geographic locations, we found that UX practitioners col- laborate to primarily identify more usability problems and generate redesign suggestions , and to improve reliability of results to a lesser extent. We identified three modes of collaboration: independently analyze different portions of the data with little or no overlap and then collaborate (i.e., divide and conquer) , collaboratively analyze the session with little or no independent analysis (i.e., group analysis) , and independently analyze the same set of data and then collaborate . Although the third mode was recommended by the literature, it was least adopted among our respondents because it was perceived to be more time-consuming than the other two. Moreover, most of the respondents encountered challenges related to lack of time as 70% needed to complete analysis within 1 week, impacting their choice of collaboration modes . These findings highlight an opportunity to address the trade-offs between efficiency and validity of analysis results. What’s more, respondents also experienced disagreements with colleagues regarding usability problems and difficulty in merg- ing analysis from multiple practitioners. Our survey findings could potentially inform UX practitioners about how their colleagues per- ceive collaboration during data analysis. In addition, our findings reveal opportunities for developing better methods and tools to facilitate collaboration during analysis , for example, developing an integrated platform to support both analysis and collaboration in one place, establishing group analysis procedures, supporting both synchronous and asynchronous collaboration, and integrating team members with various skill levels."}]
Understanding Peer Review of Software Engineering Papers
You are invited to participate in a study entitled “Identifying Best Practices in Peer Review” that is being conducted by Neil Ernst, Daniel Mendez, Jeff Carver, and Marco Torchiano.
This paper presents an interview- and survey-based study to understand some of the mechanisms by which reviewers in software engineering conduct reviews. We secured a large cohort of respondents derived from program committee members and journal reviewers of the past few years, many of whom have received recognition for their excellent reviews. Our paper outlined what these respondents see happening in the peer review world, including a large amount of time spent on reviews. We then explained the practices common to these respondents, followed by some of their tips and beliefs about what should be in a review, and how paper writers can optimize their paper for reviewers. The most important properties of a good review are that it be helpful and factually based. Finally, while we initially thought the perceptions and practices of the award winners would differ from non-award winners, the results show remarkably few large differences.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Understanding Peer Review of Software Engineering Papers* and the introduction:You are invited to participate in a study entitled ”Identifying Best Practices in Peer Review” that is being conducted by Neil Ernst, Daniel Mendez, JeffCarver, and Marco Torchiano."},{"role":"assistant","content":"Conclusion: This study conducted an interview and survey-based study to understand some of the mechanisms by which reviewers in software engineering conduct reviews. We secured a large cohort of respondents derived from program committee mem- bers and journal reviewers of the past few years, many of whom have received recognition for their excellent reviews. Our paper outlined what these respondents see happening in the peer review world, including a large amount of time spent on reviews. We then explained the practices common to these respondents, fol- lowed by some of their tips and beliefs about what should be in a review, and how paper writers can optimize their paper for reviewers. Most important properties of a good review are for it to be helpful and factually-based. Finally, while we initially thought the perceptions and practices of the award-winners would differ from non-award winners, the results show remarkably few large differences."}]
Communication in Immersive Social Virtual Reality: A Systematic Review of 10 Years’ Studies
Communication is defined as the transmission of resources, such as knowledge, data, and skills, among different parties using shared symbols and media [12, 62]. Efficient communication is an essential need that impacts work efficiency and user experiences in contexts such as collaboration [64], social contact [35], meetings [1], education [45], and gaming [65]. Recently, with the outbreak of COVID-19, the need for remote communication has increased dramatically [18, 26, 30, 52]. The current mainstream technology for remote communication is video conferencing systems such as Zoom and Skype. However, these types of communication media do not provide an immersive environment as face-to-face communication does. As a result, they do not provide users with the feeling of bodily closeness, emotional closeness, and the experience of physical presence. They also limit users’ interaction with the environment and objects, such as visiting museums or traveling together [22].
Communication experiences and quality are crucial in social VR for collaboration and building social connectedness. We have conducted a systematic review using the PRISMA method to understand how communication has been studied in social VR in papers published in the past ten years in the major digital libraries (i.e., ACM Digital Library, IEEE Xplore, and Springer). According to our results, we identified the key factors that affect people’s communication experience and quality in social VR, including avatar representations bringing a sense of anonymity and relaxation, and various approaches to convey emotion and exchange information. Furthermore, we investigated the methodology used in the surveyed studies and identified research opportunities to improve the design of social VR applications to better support communication and make it more accessible in social VR. We discussed and proposed future research directions according to the results of the paper review, such as using social VR as a communication medium for remote family members.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Communication in Immersive Social Virtual Reality: A Systematic Review of 10 Years’ Studies* and the introduction:Communication is defined as the transmission of resources, such as knowledge, data, and skills, among different parties using shared symbols and media [ 12 , 62 ]. Efficient communication is an essential demand which would impact work efficiency and user experiences in contexts such as collaboration [ 64 ], social contact [ 35 ], meeting [ 1 ], education [ 45 ], and gaming [ 65 ]. Recently, with the outbreak of COVID-19, the need for remote communication has increased dramatically [ 18 , 26 , 30 , 52 ]. The current mainstream technology for remote communication is video conferencing systems such as Zoom and Skype. However, these types of communication media do not provide an immersive environment as face-to-face does. As a result, they do not provide users with the feeling of bodily closeness, emotional closeness, and the experiences of physical presence. They also limit users’ interaction with the environment and objects, such as visiting museums or traveling together [22]."},{"role":"assistant","content":"Conclusion: Communication experiences and quality are crucial in social VR for collaboration and building social connectedness. We have con- ducted a systematic review using the PRISMA method to understand how communication has been studied in social VR in the past ten years’ published papers in the major digital libraries (i.e., ACM Digi- tal Library, IEEE Xplore and Springer). According to our results, we identified the key factors that affect people’s communication experi- ence and quality in social VR, including the representation of avatar bringing the sense of anonymity and relax, various approaches to convey emotion and exchange information. Furthermore, we inves- tigated the methodology used in surveyed studies and identified research opportunities to improve the design of social VR applica- tions to better support communication and make it more accessible in the social VR. We discussed and proposed future research direc- tions according to the results of paper reviews, such as using social VR as communication media for remote family relatives."}]
Deep Learning for Image Denoising: A Survey
Image processing has numerous applications, including image segmentation [28], image classification [25,38,32,12], object detection [13], video tracking [36], image restoration [48], and action recognition [35]. In particular, image denoising is one of the most important branches of image processing technologies and is used as an example to show the development of image processing technologies over the last 20 years [42]. Buades et al. [5] proposed a non-local algorithm to deal with image denoising. Lan et al. [19] fused the belief propagation inference method and Markov Random Fields (MRFs) to address image denoising. Dabov et al. [9] proposed transforming groups of similar two-dimensional image fragments into three-dimensional data arrays to improve sparsity for image denoising. These selection and extraction methods achieve impressive performance for image denoising. However, the conventional methods face two challenges [45]. First, these methods are non-convex and require manually setting parameters. Second, these methods involve a complex optimization problem at the test stage, resulting in high computational cost.
This paper first comprehensively introduces the development of deep learning technologies for image processing applications, and then shows the implementations of typical CNNs. After that, image denoising is illustrated in detail, summarizing the differences and ideas of different methods for image denoising in the real world. Finally, this paper discusses the challenges of deep learning methods for image processing applications and offers solutions. This review offers important cues on deep learning technologies for image processing applications. We believe that this paper could provide researchers with a useful guideline for working in the related fields, especially for beginners working in deep learning.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Image Denoising: A Survey* and the introduction:Image processing has numerous applications including image segmentation [28], image classification [25,38,32,12], object detection [13], video tracking [36], im- age restoration [48] and action recognition [35]. Especially, the image denoising technology is one of the most important branches of image processing technolo- gies and is used as an ex-ample to show the development of the image processing technologies in last 20 years [42]. Buades et al. [5] proposed a non-local algorithm method to deal with image denoising. Lan et al. [19] fused the belief propagation inference method and Markov Random Fields (MRFs) to address image denois- ing. Dabov et al. [9] proposed to transform grouping similar two-dimensional im- age fragments into three-dimensional data arrays to improve sparisty for image denoising. These selection and extraction methods have amazing performance for image denoising. However, the conventional methods have two challenges [45]. First, these methods are non-convex, which need to manually set parameters. Second, these methods refer a complex optimization problem for the test stage, resulting in high computational cost."},{"role":"assistant","content":"Conclusion: This paper first comprehensively introduces the development of deep learning technologies on image processing applications. And then shows the implementa- tions of typical CNNs. After that, image denoising is illustrated in detail, which concludes the differences and ideas of different methods for image denoising in real world. Finally, this paper shows the challenges of deep learning methods for image processing applications and offers solutions. This review offers important cues on deep learning technologies for image processing applications. We believe that this paper could pro-vide researchers with a useful guideline working in the related fields, especially for the beginners worked in deep-learning."}]
Abstract
Outsourcing data storage to cloud-based services is gaining popularity among all kinds of users, as it seems to be a reasonable alternative to a private cloud. Cost aside, customers subordinate their choice of an adequate cloud provider to various factors, particularly availability, security, and privacy of the stored data.
In previous sections, we analyzed existing distributed storage systems providing additional secrecy by use of fragmentation. We also presented database fragmentation solutions separating data in order to avoid full encryption. Few systems focus on providing long-term, secure, and low-cost data storage. Another motivation is the possibility of minimizing encryption inside databases, while still providing a good level of data protection.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Abstract* and the introduction:Outsourcing data storage to cloud-based services gains popularity among all kinds of users, as it seems to be a reasonable alternative to a private cloud. Cost aside, customers subordinate their choice of an adequate cloud provider to various factors, particularly availability, security, and privacy of the stored data."},{"role":"assistant","content":"Conclusion: In previous sections, we analyzed existing distributed storage systems provid- ing additional secrecy by use of fragmentation. We also presented database fragmentation solutions separating data in order to avoid full encryption. Few systems focus on providing a long term, secure and non-costly data storage. An- other motivation is the possibility of minimizing encryption inside databases, while still providing a good level of data protection."}]
Deep Long-Tailed Learning: A Survey
Deep learning allows computational models, composed of multiple processing layers, to learn data representations with multiple levels of abstraction [1], [2] and has made incredible progress in computer vision [3], [4], [5], [6], [7], [8]. The key enablers of deep learning are the availability of large-scale datasets, the emergence of GPUs, and the advancement of deep network architectures [9]. Thanks to the strong ability of learning high-quality data representations, deep neural networks have been applied with great success to many visual discriminative tasks, including image classification [6], [10], object detection [7], [11] and semantic segmentation [8], [12].
In this survey, we have extensively reviewed classic deep long-tailed learning methods proposed before mid-2021, according to the taxonomy of class re-balancing, information augmentation and module improvement. We have empirically analyzed several state-of-the-art long-tailed methods by evaluating to what extent they address the issue of class imbalance, based on a newly proposed relative accuracy metric. Following that, we discussed the main application scenarios of long-tailed learning, and identified potential innovation directions for methods and task settings. We expect that this timely survey not only provides a better understanding of long-tailed learning for researchers and the community, but also facilitates future research.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Long-Tailed Learning: A Survey* and the introduction:D EEP learning allows computational models, composed of multiple processing layers, to learn data representations with multiple levels of abstraction [1], [2] and has made incredible progress in computer vision [3], [4], [5], [6], [7], [8]. The key enablers of deep learning are the availability of large-scale datasets, the emergence of GPUs, and the advancement of deep network architectures [9]. Thanks to the strong ability of learning high- quality data representations, deep neural networks have been applied with great success to many visual discriminative tasks, including image classification [6], [10], object detection [7], [11] and semantic segmentation [8], [12]."},{"role":"assistant","content":"Conclusion: In this survey, we have extensively reviewed classic deep long- tailed learning methods proposed before mid-2021, according to the taxonomy of class re-balancing, information augmentation and module improvement. We have empirically analyzed several state-of-the-art long-tailed methods by evaluating to what extent they address the issue of class imbalance, based on a newly proposed relative accuracy metric. Following that, we discussed the main application scenarios of long-tailed learning, and identified potential innovation directions for methods and task settings. We expect that this timely survey not only provides a better understanding of long-tailed learning for researchers and the community, but also facilitates future research."}]
A Survey on Cross-Lingual Summarization
To help people efficiently grasp the gist of documents in a foreign language, Cross-Lingual Summarization (XLS) aims to generate a summary in the target language from the given document(s) in a different source language. This task could be regarded as a combination of monolingual summarization (MS) and machine translation (MT), both of which are unsolved natural language processing (NLP) tasks and have been continuously studied for decades (Paice, 1990; Brown et al., 1993), which makes XLS a challenging task.
In this paper, we present the first comprehensive survey of current research efforts on XLS. We systematically summarize existing XLS datasets and methods, highlight their characteristics and compare them with each other to provide deeper analyses. In addition, we suggest multiple promising directions from different perspectives to facilitate further research on XLS. We hope that this XLS survey could provide a clear picture of this topic and boost the development of current XLS technologies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Cross-Lingual Summarization* and the introduction:To help people efficiently grasp the gist of docu- ments in a foreign language, Cross-Lingual Sum- marization (XLS) aims to generate a summary in the target language from the given document(s) in a different source language. This task could be re- garded as a combination of monolingual summa- rization (MS) and machine translation (MT), both of which are unsolved natural language process- ing (NLP) tasks and have been continuously stud- ied for decades ( Paice , 1990 ; Brown et al. , 1993 ), which makes XLS a challenging task."},{"role":"assistant","content":"Conclusion: In this paper, we present the first comprehensive survey of current research efforts on XLS. We sys- tematically summarize existing XLS datasets and methods, highlight their characteristics and com- pare them with each other to provide deeper anal- yses. In addition, we give multiple perspective di- rections to facilitate further research on XLS. We hope that this XLS survey could provide a clear picture of this topic and boost the development of the current XLS technologies."}]
A Review of In-Memory Space-Efficient Data Structures for Temporal Graphs
Widespread adoption of complex network concepts in information technologies has driven the creation of large volumes of data to be modelled as graphs. This increasing volume of data produced at high speeds brings us new challenges. On one hand, we need efficient computational mechanisms to persist data that evolve continuously over time on cheap external storage. On the other hand, we need specialized techniques to load these data in faster (and more expensive) memories using minimal space and, then, process queries as fast as possible to construct valuable knowledge.
This paper reviewed studies about data structures for storing and querying temporal graphs in primary memory. We noted that many authors use data compression techniques to reduce the amount of space per contact needed to store temporal graphs. There are also succinct data structures that allow a variety of useful queries, with query time complexity similar to non-compressed data structures. Therefore, algorithms could efficiently process large amounts of data in primary memory.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review of In-Memory Space-Efficient Data Structures for Temporal Graphs* and the introduction:Widespread adoption of complex network concepts in information technolo- gies has driven the creation of large volumes of data to be modelled as graphs. This increasing volume of data produced at high speeds brings us new chal- lenges. On one hand, we need efficient computational mechanisms to persist data that evolve continuously over time on cheap external storage. On the other hand, we need specialized techniques to load these data in faster (and more expensive) memories using minimal space and, then, process queries as fast as possible to construct valuable knowledge."},{"role":"assistant","content":"Conclusion: This paper reviewed studies about data structures for storing and querying temporal graphs in primary memory. We noted that many authors use data compression techniques to reduce the amount of space per contact needed to store temporal graphs. There are also succinct data structures that allows a variety of useful queries with time complexity of queries similar to non- compressed data structures. Therefore, algorithms could efficiently process large amount of data in primary memory."}]
A Comprehensive Survey on Networking over TV White Spaces
In a historic ruling in 2008, the Federal Communications Commission (FCC) in the United States opened up the TV white space spectrum for unlicensed secondary usage [1]. TV white spaces refer to the allocated but locally unused TV channels (between 54 MHz and 698 MHz in the US). Similar regulations have been adopted by several other countries including the UK [2], Canada [3], Singapore [4], Malaysia [5], and South Africa [6]. Unlicensed secondary devices such as Wi-Fi [7], ZigBee [8], and Bluetooth [9] are allowed to access and operate over the TV white spaces without interfering with the primary users, i.e., TV stations or other licensed users of the spectrum. To access TV white spaces, an unlicensed device can either query a cloud-hosted geo-location spectrum database or perform a sensing operation to determine the energy of the spectrum [1]. In 2010, the FCC mandated that an unlicensed device must query a database to learn about the TV white spaces in its location [10]. Also, the FCC kept the sensing technique as an optional choice for secondary users [10].
The 2008 FCC ruling in the United States on the TV white space spectrum has opened up new opportunities for unlicensed operation in the TV band. TV white spaces’ availability, diverse bandwidth, and excellent propagation characteristics make them suitable for long-range, low-power, and large-area applications such as sensing and monitoring applications, agricultural IoT applications, wireless broadband access, real-time applications, smart and connected communities, smart utility applications, etc. Several new network architectures and protocols have been proposed solely targeting the TV white space spectrum as well. However, TV white space fragmentation, temporal diversity, and spatial diversity make it challenging to adopt them directly due to interference and coexistence issues. In this paper, we have investigated these key research challenges and provided future directions. We have then provided key insights and comparison between several protocols built on top of TV white spaces. Also, we have presented the opportunities of TV white space protocols in great detail.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Comprehensive Survey on Networking over TV White Spaces* and the introduction:In a historic ruling in 2008, the Federal Communication Commission (FCC) in the United States opened up the TV white space spectrum for unlicensed secondary usage [1]. TV white spaces refer to the allocated but locally unused TV channels (between 54 MHz and 698 MHz in the US). Similar regulations have been adopted by several other countries including UK [2], Canada [3], Singapore [4], Malaysia [5], and South Africa [6]. Unlicensed secondary devices such as Wi-Fi [7], ZigBee [8], and Bluetooth [9] are allowed to access and operate over the TV white spaces without interfering the primary users, i.e., TV stations or other licensed users of the spectrum. To access TV white spaces, an unlicensed device can either query a cloud-hosted geo-location spec- trum database or perform a sensing operation to determine the energy of the spectrum [1]. In 2010, FCC mandated that an unlicensed device must query a database to learn about the TV white spaces in its location [10]. Also, FCC kept the sensing technique as an optional choice for secondary users [10]."},{"role":"assistant","content":"Conclusion: The 2008 FCC ruling in the United States on TV white space spectrum has opened up new opportunities for unlicensed operation in the TV band. TV white spaces’ availability, diverse bandwidth, and excellent propagational character- istics make them suitable for long range, low-power, and large area applications such as sensing and monitoring ap- plications, agricultural IoT applications, wireless broadband access, real-time applications, smart and connected commu- nities, smart utility applications, etc. Several new network architectures and protocols have been proposed solely tar- geting the TV white space spectrum as well. However, TV white space fragmentation, temporal diversity, and spatial diversity make it challenging to adopt them directly due to interference and coexistence issues. In this paper, we have investigated these key research challenges and provided future directions. We have then provided key insights and comparison between several protocols built on top of TV white spaces. Also, we have presented the opportunities of TV white space protocols in great details."}]
Towards Evaluation of Autonomously Generated Musical Compositions: A Comprehensive Survey
The scope of algorithmic composition includes a wide range of tasks, which consist of the creation of melodies, chords, rhythms or even lyrics, i.e., all the typical components of music. The purpose of this work is to present the output of the neural network without any modifications. The aim of the research is not to romanticize artificial intelligence and its use in algorithmic composition, but on the contrary to point out its advantages and disadvantages. We therefore start from the idea that only undistorted results carry a certain aesthetic, which is destroyed by later authorial adaptations. There are many applications that aim to create a complete model for an autonomously generated composition; systems are able to generate muzak songs, assist singers in transcribing songs or can imitate long-dead authors. But why? This is the right question that we would like to address in this paper. Subjective understanding of creativity or aesthetics differs not only within preferences (popular authors or genres), but also on the basis of lived experience or socio-cultural environment. So, what do we want to achieve with such an adaptation? What is the benefit of the resulting work for the author, who can no longer evaluate this composition? And in what ways should we evaluate such a composition at all? Let’s try to paraphrase musicologist Matěj Kratochvíl here: “So far I have not found anyone who would publicly consider whether the world will somehow enrich the existence of a new composition by a dead author, which in comparison with his actual work sounds as if it was composed by a machine. So, I dare say that the music world has not gained anything new.” (Kratochvíl 2020)
Although we have proposed several different methods of analytical, musicological and aesthetic evaluation, Matěj Kratochvíl’s rhetorical question from the introductory part of this paper (Kratochvíl 2020) was far from answered. Although the evaluation methods of musical metacreation offer several possibilities by which we can to some extent determine the impact of similar generative projects (see media evaluation and audience evaluation), the measurability of such outputs is at least problematic. So, let’s try to think again about Matěj Kratochvíl’s statement and verify the contribution that generative art can provide. The emergence of models that focus on generative forms of art has brought with it, above all, the loss of illusions that art will remain the exclusive domain of man. However, generative art, and therefore artificial intelligence in general, faces many difficulties. In their reports, journalists often describe the resulting works in layman’s terms without any knowledge of cultural or scientific understanding, which further damages the authenticity of these works. The idea of artificial intelligence as omniscient robots, which could one day exterminate and replace humanity, is fueled not only by media companies, often driven by commercial popularity, which aim to convince investors that their model is the real, unadulterated artificial intelligence. For laymen, AI is a difficult topic to grasp, which is most often associated with the film hero Terminator, or with a similar theme of cult sci-fi films. We encounter the same situation in the case of computational creativity. The media explosion of artificial intelligence hardly distinguishes between machine learning and the supposed, nowadays non-existent general artificial intelligence. Firms engaged in the application of neural networks are then to some extent forced to maintain this unrealistic picture of AI, which today represents a cultural fetish rather than an object of scientific interest. If, in the case of supposed, truly omniscient artificial intelligence, we speak of the loss of illusions, then this issue points to a perhaps even more interesting perspective of creative tendencies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Towards Evaluation of Autonomously Generated Musical Compositions: A Comprehensive Survey* and the introduction:The scope of algorithmic composition includes a wide range of tasks, which consist of the creation of melodies, chords, rhythms or even lyrics, i.e., all the typical components of music. The purpose of this work is to present the output of the neural network without any modifications. The aim of the research is not to romanticize artificial intelligence and its use in algorithmic composition, but on the contrary to point out its advantages and disadvantages. We therefore start from the idea that only undistorted results carry a certain aesthetic, which is destroyed by later authorial adaptations. There are many applications that aim to create a complete model for an autonomously generated composition; systems are able to generate muzak songs, assist singers in transcribing songs or can imitate long-dead authors. But why? This is the right question that we would like to address in this paper. Subjective understanding of creativity or aesthetics differs not only within preferences (popular authors or genres), but also differs on the basis of experienced experience or socio-cultural environment. So, what do we want to achieve with such an adaptation? What is the benefit of the resulting work for the author, who can no longer evaluate this composition? And in what ways should we evaluate such a composition at all? Let’s try to paraphrase musicologist Matěj Kratochvíl here: “So far I have not found anyone who would publicly consider whether the world will somehow enrich the existence of a new composition by a dead author, which in comparison with his actual work sounds as if it was composed by a machine. So, I dare say that the music world has not gained anything new.” (Kratochvíl 2020)"},{"role":"assistant","content":"Conclusion: Although we have proposed several different methods of analytical, musicolog- ical and aesthetic evaluation, Matěj Kratochvíl’s rhetorical question from the introductory part of this paper (Kratochvíl 2020) was far from answered. Al- though the evaluation methods of musical metacreaction offer several possibili- ties by which we can to some extent determine the impact of similar generative projects (see media evaluation and audience evaluation), the measurability of such outputs is at least problematic. So, let’s try to think again about Matěj Kratochvíl’s statement and verify the contribution that generative art can pro- vide. The emergence of models that focus on generative forms of art has brought with it, above all, the loss of illusions that art will remain the exclusive domain of man. However, generative art, and therefore artificial intelligence in general, faces many difficulties. In their reports, journalists often describe the resulting works in layman’s terms without any knowledge of cultural or scientific under- standing, which further damages the authenticity of these works. The idea of artificial intelligence as omniscient robots, which could one day exterminate and replace humanity, is fueled not only by media companies, often by commercial popularity, which aims to convince investors why their model is the real, unadul- terated artificial intelligence. For laymen, AI is a difficult topic to grasp, which is most often associated with the film hero Terminator, or with a similar theme of cult sci-fifilms. 
We encounter the same situation in the case of computational creativity. The media explosion of artificial intelligence hardly distinguishes between machine learning and the supposed, nowadays non-existent general ar- tificial intelligence. Firms engaged in the application of neural networks are then to some extent forced to maintain this unrealistic picture of AI, which today represents a cultural fetish rather than an object of scientific interest. If, in the case of supposed, truly omniscient artificial intelligence, we speak of the loss of illusions, then this issue points to a perhaps even more interesting perspective of creative tendencies."}]
A Scoping Review of Publicly Available Language Tasks in Clinical Natural Language Processing
Since the inception of the first Integrating Biology and the Bedside (i2b2) shared task in 2006, currently known as the National Natural Language Processing (NLP) Clinical Challenge (n2c2), the field of clinical NLP has advanced in clinical applications that rely on text from the electronic health record (EHR). Tasks with publicly available data (e.g. shared tasks) provide a new avenue for advancing the state-of-the-art using publicly available datasets in a sector that is otherwise heavily regulated and protected from sharing patient data. In an editorial approximately a decade ago, Chapman et al. [1] identified the major barriers for clinical NLP developments where shared tasks may provide a solution. At the time, some of the challenges were lack of data resources including annotation tools, benchmarking and standardized metrics, reproducibility, collaboration between the general NLP communities and health research communities, and the need for user-centered development.
Interest in introducing and participating in clinical NLP tasks is growing, with more tasks surfacing each year. The breadth of tasks is also growing, with topics varying from tasks with specific clinical applications to those facilitating clinical language understanding and reasoning. There is no doubt that the field will continue to grow and attract more researchers from both the general NLP domain and the clinical informatics community. We encourage future work on proposing tasks/shared tasks to overcome barriers in community collaboration, reporting transparency, and consistency of data preparation.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Scoping Review of Publicly Available Language Tasks in Clinical Natural Language Processing* and the introduction:Since the inception of the first Integrating Biology and the Bedside (i2b2) shared task in 2006, currently known as the National Natural Language Processing (NLP) Clinical Challenge (n2c2), the field of clinical NLP has advanced in clinical applications that rely on text from the electronic health record (EHR). Tasks with publicly available data (e.g. shared tasks) provide a new avenue for advancing the state-of-the-art using publicly available datasets in a sector that is otherwise heavily regulated and protected from sharing patient data. In an editorial approximately a decade ago, Chapman et al.[1] identified the major barriers for clinical NLP developments where shared tasks may provide a solution. At the time, some of the challenges were lack of data resources including annotation tools, benchmarking and standardized metrics, reproducibility, collaboration between the general NLP communities and health research communities, and the need for user-centered development."},{"role":"assistant","content":"Conclusion: The interests in introducing and participating in clinical NLP tasks are growing with more tasks surfacing each year. The breadth of tasks is also growing with topics varying from tasks with specific clinical applications to those facil- itating clinical language understanding and reasoning. It is no doubt that the field will continue to grow and attract more researchers from both general NLP domain and the clinical informatics community. We encourage future work on proposing tasks/shared tasks to overcome barriers in community collaboration, reporting transparency, and consis- tency of data preparation."}]
AN EXTENSIVE REVIEW OF COMPUTATIONAL DANCE AUTOMATION TECHNIQUES AND APPLICATIONS
Efforts to combine dance and computational power can be traced back to 1967. Being a domain that needs relatively more innovation and creativity than merely following standard procedures, dance was the slowest to adopt technology. The earliest attempt was published by A. Michael Noll [2] in Dance Magazine in 1967, although New York-based choreographer Merce Cunningham also did the same [3].
Creativity is considered to be a gift, and no two persons are the same in this process. A field of art like dance is considered to be an entirely creative process, although the basics are very clearly specified and taught in the case of classical dance forms. Each choreographer is known for their particular style and creative form. Using a machine to aid in this creative process has been attempted by many well-known creative artists, and dance is also a domain known to accept and experiment with this. We have reviewed at least a hundred such research papers and articles and categorized them accordingly.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A N E XTENSIVE R EVIEW OF C OMPUTATIONAL D ANCE A UTOMATION T ECHNIQUES AND A PPLICATIONS* and the introduction:Efforts of combining dance and computational power can be traced back to 1967. Being a domain that needs relatively more innovation and creativity than mere following standard procedures, dance was the slowest to adopt technology. The earliest attempt was published by A. Michael Noll [2] in Dance Magazine in 1967, although New York based choreographer Merce Cunningham also did the same [3]."},{"role":"assistant","content":"Conclusion: Creativity is considered to be a gift and no two persons can be same in this process. A field of art like Dance is considered to be entirely a creative process although the basics are very clearly specified and taught in case of Classical Dance forms. Each choreographer is known for their particular style and creative form. Using a machine to aid in this creative process has been attempted by many well-known creative artists and dance is also a domain known to accept and start experimenting with this. We have reviewed at least hundred of such research papers and articles and categorized them accordingly."}]
TWENTY-TWO YEARS SINCE REVEALING CROSS-SITE SCRIPTING ATTACKS: A SYSTEMATIC MAPPING AND A COMPREHENSIVE SURVEY
The use of web applications through the Internet has become an indispensable means for different business and governmental organizations to reduce costs, speed up activities, improve the quality of services, and reach as many targeted people as possible. Users also get immense benefits from the services provided online. However, those gains are not without risks; web applications requiring users’ registration through input forms are preferred targets for different hacking attacks, putting their own and users’ confidential data at risk.
In this paper, we conducted a systematic mapping and a comprehensive survey studying the advancement in research to tackle XSS attacks. The study is not restricted to a period of time and covered high-quality studies published since the discovery of XSS. Several studies were found in the literature, but a remarkable interest is only observed in the last few years. Despite the diversity of solutions proposed over the years, XSS attacks are still prevalent and target new web applications and platforms. The study revealed much attention to XSS vulnerability detection instead of its repair. For securing web navigation, several lines of defense should be provided. As a starting point, developers need to be aware of the consequences resulting from ignoring security practices; at the same time, effective tools enabling the automatic detection and repair of XSS vulnerabilities should be made available to facilitate their tasks. Dynamic defense techniques against XSS attacks should also be provided for protecting innocent users when new attacks occur; those techniques are still immature for providing the intended protection level against all types of XSS attacks [180]. Traditional filtering approaches have become ineffective against newly developed web technologies, so more advanced techniques need to be explored. Despite the wide adoption of machine learning based techniques for the detection of XSS attacks, existing endeavors only focus on performance analysis, omitting three important problems relevant to cyber-security communities: (1) interpretability of prediction results, (2) robustness of models against adversarial attacks, and (3) suitability for integration in real-world architectures and platforms. The review also revealed a bias toward basic XSS attacks; this needs to be alleviated by advocating more research targeting other XSS attack variants, specifically WXSS and XAS, which target online social networks and may affect wider populations in a short period of time. Moreover, regarding the rapid development of web technologies, XSS attacks written in other scripting languages such as VBScript and ActiveX or embedded in advanced web languages such as HTML5 need to be properly studied for completeness and robustness.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*T WENTY - TWO YEARS SINCE REVEALING CROSS - SITE SCRIPTING ATTACKS : A SYSTEMATIC MAPPING AND A COMPREHENSIVE SURVEY* and the introduction:The use of web applications through the Internet has become an indispensable mean for different business and governmental organizations to reduce costs, speed up activities, improve the quality of services and reach as many targeted people as possible. Users also get immense benefits from online provided services. However, those gains are not without risks; web applications requiring users’ registrations through input forms are preferred targets for different hacking attacks, putting their own and users’ confidential data at risks."},{"role":"assistant","content":"Conclusion: In this paper, we conducted a systematic mapping and a comprehensive survey studying the advancement in research to tackle XSS attacks. The study is not restricted to a period of time and covered high quality studies published since its discovery. Several studies were found in the literature but a remarkable interest is only observed in the last few years. Despite the diversity of solutions proposed over the years, XSS attacks are still prevalent and targeting new web applications and platforms. The study revealed much attention to XSS vulnerability detection instead of its repair. For securing web navigation, several defense lines should be provided. As a staring point, developers need to be aware of the consequences resulted from ignoring security practices, at the same time, effective tools enabling the automatic detection and repair of XSS vulnerabilities should be made available to facilitate their tasks. Dynamic defense techniques against XSS attacks should also be provided for protecting innocent users when new attacks occur, those techniques are still immature for providing the intended protection level against all types of XSS attacks [ 180 ]. Traditional filtering approaches become ineffective regarding the new developed web technologies, more advanced techniques need to be explored. Although the wide adoption of machine learning based techniques for the detection of XSS attacks, existing endeavors only focus on performance analysis omitting three important problems related to cyber-security communities (1) interpret ability of prediction results, (2) robustness of models against adversarial attacks and (3) suitability for integration in real-world architectures and platforms. The review also denoted a bias toward basic XSS attacks; this needs to be alleviated by advocating more research targeting other XSS attack variants, specifically WXSS and XAS that are targeting online social networks that may affect wider populations in a short period of time. Moreover, regarding the rapid development of web technologies, XSS attacks written in other scripting languages such VBScript and ActiveX or embedded in advanced web languages such as HTML5 need to properly be studied for completeness and robustness."}]
A Deep Journey into Super-resolution: A Survey
‘Everything has been said before, but since nobody listens we have to keep going back and beginning all over again.’
Single-image super-resolution is a challenging research problem with important real-life applications. The phenomenal success of deep learning approaches has resulted in rapid growth in deep convolutional network based techniques for image super-resolution. A diverse set of approaches have been proposed with exciting innovations in network architectures and learning methodologies. This survey provides a comprehensive analysis of existing deep-learning based methods for super-resolution. We note that the super-resolution performance has been greatly enhanced in recent years with a corresponding increase in the network complexity. Remarkably, the state-of-the-art approaches still suffer from limitations that restrict their application to key real-world scenarios (e.g., inadequate metrics, high model complexity, inability to handle real-life degradations). We hope this survey will attract new efforts towards the solution of these crucial problems.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Deep Journey into Super-resolution: A Survey* and the introduction:‘Everything has been said before, but since nobody listens we have to keep going back and beginning all over again.’"},{"role":"assistant","content":"Conclusion: Single-image super-resolution is a challenging research problem with important real-life applications. The phenom- enal success of deep learning approaches has resulted in rapid growth in deep convolutional network based tech- niques for image super-resolution. A diverse set of ap- proaches have been proposed with exciting innovations in network architectures and learning methodologies. This survey provides a comprehensive analysis of existing deep- learning based methods for super-resolution. We note that the super-resolution performance has been greatly enhanced in recent years with a corresponding increase in the network complexity. Remarkably, the state-of-the-art approaches still suffer from limitations that restrict their application to key real-world scenarios ( e.g. , inadequate metrics, high model complexity, inability to handle real-life degradations). We hope this survey will attract new efforts towards the solu- tion of these crucial problems."}]
Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey
Driven by the need to protect the enormous value within data and the evolution of emerging data mining techniques, information leakage has become a growing concern for governments, organizations, and individuals [3]. Compromising the confidentiality of protected information is an information leakage incident and a prominent threat to cyber security [2]; for instance, the leakage of sensitive information results in both financial and reputational damage to the affected organizations [19]. Thus, information leakage incidents are indeed an urgent threat that deserves public attention.
In this survey, the ML-based stealing attack against controlled information and the corresponding defense mechanisms are reviewed. The generalized MLBSA methodology compatible with the published work is outlined. Specifically, the MLBSA methodology uncovers how adversaries steal controlled information in five phases, i.e., reconnaissance, data collection, feature engineering, attacking the objective, and evaluation. Based on the different types of controlled information, the literature was reviewed in three categories: controlled user activities information, controlled ML model-related information, and controlled authentication information. The attacker is assumed to use the system without any administrative privilege. This assumption implies that user activities information was stolen by leveraging the kernel data and the sensor data, both of which are beyond the protection of the application. The attack against controlled ML model-related information is demonstrated by stealing the model description and/or stealing the training data. Similarly, keystroke data, secret keys, and password data are examples of stealing controlled authentication information.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey* and the introduction:Driven by the needs to protect the enormous value within data and the evolution of the emerging data mining techniques, information leakage becomes a growing concern for governments, organizations and individuals [ 3 ]. Compromising the confidentiality of protected information is an information leakage incident and a prominent threat of cyber security [ 2 ], for instance, the leakage of sensitive information results in both financial and reputational damages to the organizations [ 19 ]. Thus, information leakage incidents are indeed an urgent threat that deserves the public attention."},{"role":"assistant","content":"Conclusion: In this survey, the ML-based stealing attack against the controlled information and the defense mechanisms are reviewed. The generalized MLBSA methodology compatible with the published work is outlined. Specifically, the MLBSA methodology uncovers how adversaries steal the controlled information in five phases, i.e. reconnaissance, data collection, feature engineering, attacking the objective, and evaluation. Based on different types of the controlled information, the literature was reviewed in three categories consisting of the controlled user activities information, the controlled ML model related information, and the controlled authentication information. The attacker is assumed to use the system without any administrative privilege. This assumption implies that user activities information was stolen by leveraging the kernel data and the sensor data both of which are beyond the protection of the application. The attack against the controlled ML model-related information is demonstrated with stealing the model description and/or stealing the training data. Similarly, keystroke data, secret keys, and password data are the examples of stealing the controlled authentication information."}]
Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey
Deep Learning is a subfield of Machine Learning (ML) that leverages the capabilities of artificial neural networks to automatically learn from data. These networks are fully-differentiable computational graphs optimized via gradient descent to learn representations from raw data [ 12 ], currently being the most efficient and effective data-oriented strategy to perform several Computer Vision (CV) and Natural Language Processing (NLP) tasks.
In this survey paper, we have investigated debiasing methods targeting fairness-aware neural networks for language and vision research. We have contextualized fairness and its relationship with biases and their possible origins. We have presented the main metrics and evaluation measures for assessing the level of fairness provided by models for computer vision and natural language processing tasks, reviewing both application-specific and general-purpose measures, their proper use, applicability, and known limitations. Then, we have discussed, in depth, several debiasing methods for neural models under the perspective of a new taxonomy for the area, which is yet another contribution of this paper. We concluded with our thoughts on the most pressing fairness challenges in neural networks, calling attention to potential trends and future research directions.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey* and the introduction:Deep Learning is a subfield of Machine Learning (ML) that leverages the capabilities of artificial neural networks to automatically learn from data. These networks are fully-differentiable computational graphs optimized via gradient descent to learn representations from raw data [ 12 ], currently being the most efficient and effective data-oriented strategy to perform several Computer Vision (CV) and Natural Language Processing (NLP) tasks."},{"role":"assistant","content":"Conclusion: In this survey paper, we have investigated debiasing methods targeting fairness-aware neural networks for language and vision research. We have contextualized fairness and its relationship with biases and their possible origins. We have presented the main metrics and evaluation measures for assessing the level of fairness provided by models for computer vision and natural language processing tasks, reviewing both application-specific and general-purpose measures, their proper use, applicability, and known limitations. Then, we have discussed, in depth, several debiasing methods for neural models under the perspective of a new taxonomy for the area, which is yet another contribution of this paper. We concluded with our thoughts on the most pressing fairness challenges in neural networks, calling attention for potential trends and future research directions."}]
Deep Neural Networks for COVID-19 Detection and Diagnosis Using Images and Acoustic-Based Techniques: A Recent Review
The novel severe acute respiratory syndrome-related coronavirus (SARS-CoV-2) emerged in Wuhan, China in December 2019 and spread to countries worldwide. The virus causes pneumonia of unknown cytology, and the resulting disease is named COVID-19. This infectious disease was declared a public health emergency of international concern on January 30, 2020, because of its high infectivity and mortality. The lack of successful diagnosis or preventive measures has led to a rise in the number of cases and an increase in the cost of hospitalizations and palliative treatments. Therefore, scientists and the medical industry around the world have been urged to find prompt and accurate methods for COVID-19 detection for early prevention, screening, forecasting, drug development, and contact tracing, saving time for the scientific community and healthcare experts to move to the next diagnosis stage and reduce the death rate. Reverse transcription polymerase chain reaction (RT-PCR) is recommended to diagnose COVID-19. Additionally, there are studies in the literature using various imaging methods (computed tomography (CT) and X-ray) [ 1 , 2 , 3 , 4 , 5 ], although situations may arise that negatively affect these methods. Changes to the virus through the appearance of new mutations make classification a more challenging task [ 6 ]. Moreover, one of the biggest problems with COVID-19 patients is viral pneumonia (VP). Differentiating between viral and non-viral pneumonia (nVP) is not easy, and the coexistence of COVID-19 and viral pneumonia can have dire consequences.
Although the RT-PCR test is considered the gold standard for COVID-19 diagnosis, it is time-consuming and its results suffer from high false-negative rates. Therefore, medical imaging modalities such as chest X-ray and chest CT scans are the best alternative according to scientists. Chest X-ray radiography has a low cost and a low radiation dose, and it is available and easy to use in general or community hospitals. This review presents a detailed study of the existing solutions, mainly based on DL techniques, for early diagnosis of COVID-19. This study gives more of an insight into the thought processes of scientists and decision-makers, not only during the wave periods but also during vaccination campaigns that could require real-time mass testing. The lack of data, however, is the main obstacle to achieving efficient and real-time results. Many solutions have been presented and discussed in this review to inspire future trends and to help with eventual future diseases that might suffer from the missing-data problem. We believe that with more public databases, better DL-based approaches can be developed to detect and diagnose COVID-19 accurately. Also, while policy-makers and citizens are doing their best to submit to the difficult constraints of lockdown and social distancing, AI can be used to create more intelligent robots and autonomous machines to help the health workforce and to reduce their workload through disinfection, working in hospitals, distributing food, and helping patients. The challenge of this solution is that people lack confidence in autonomous machines and prefer to be served by a human even if there is a risk of virus transmission. Moreover, entrusting chatbots to diagnose patients needs a large amount of medical data from experts. Also, the difference in languages from one country to another makes an already difficult task still more arduous. On the other hand, when dealing with voice analysis, there are still many challenges to be taken up. For example, until now, annotated data of patients’ voices have not been publicly available for research purposes in COVID-19 detection and diagnosis. Collecting these data is mostly done in unconstrained environments (i.e., in-the-wild) using smartphones or other voice recorders. These environments are generally noisy and contain reverberation, which leads to poor data quality and makes the diagnosis and detection of COVID-19 more challenging. Finally, one of the most important future trends is to concentrate on further decreasing the false negative rate and, as far as practicable, reducing the false positive rate by the same token to accurately differentiate viral from BP.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*D EEP N EURAL N ETWORKS FOR COVID-19 D ETECTION AND D IAGNOSIS USING I MAGES AND A COUSTIC - BASED T ECHNIQUES : A R ECENT R EVIEW* and the introduction:The novel severe acute respiratory syndrome-related coronavirus (SARS-CoV-2) started from Wuhan, China in December 2019 and spread to all the countries worldwide. This virus caused pneumonia of unknown cytology and is named COVID-19. This infectious disease has been classified as a public health crisis of the international community concern on January 30, 2020, because of its high infectivity and mortality. The lack of successful diagnosis or preventive measures has led to a rise in the number of cases, an increase in the cost of hospitalizations and palliative treatments. Therefore, scientists and medical industries around the world incited to find a prompt and accurate detection of COVID-19 for early prevention, screening, forecasting, drug development, and contact tracing to save more time for the scientific community and healthcare expert to pass to the next diagnosis stage to reduce the death rate reverse transcription polymerase chain reaction (RT-PCR) is recommended to diagnose COVID-19. Additionally, there are studies in the literature using various imaging methods (computed tomography (CT) and X-ray). [ 1 , 2 , 3 , 4 , 5 ]. It may occur in situations that negatively affect these methods. The changes of viruses by the appearance of new mutations make the classifications a more challenging task [ 6 ]. Moreover, one of the biggest problems with COVID-19 patients is viral pneumonia (VP). Differentiating between viral and non-viral pneumonia (nVP) is not easy. Coexistence of COVID-19 and viral pneumonia can have dire consequences."},{"role":"assistant","content":"Conclusion: Although the RT-PCR test is considered the gold standard for COVID-19 diagnosis, it is time-consuming to make a decision because of high false-negative levels in the results. Therefore, medical imaging modalities such as chest X-ray and chest CT scans are the best alternative according to scientists. Chest X-ray radiography is of low cost and low radiation dose, it is available and easy to use in general or community hospitals. This review presents a detailed study of the existing solutions that are mainly based on DL techniques to early diagnose the COVID-19. This study gives more of an insight into the scientists’ and decision-makers’ thought processes - not only during the wave periods but also during that of the vaccination that could require real-time mass testing. The lack of data, however, is the mandatory problem to achieve efficient and real-time results. Many solutions have been presented and discussed in this review study to give more ideas to future trends and also for eventual future diseases that might suffer from the missing-data problem. We believe that with more public databases, better DL based-approaches can be developed to detect and diagnose the COVID19 accurately. Also, when policy-makers and citizens are making their best to submit to the difficult constraints of lockdown and social distancing, AI can be used to create more intelligent robots and autonomous machines to help health workforce and to reduce their workload by disinfection, working in hospitals, food distributing and helping the patients. 
The challenge of this solution is that people lack confidence in autonomous machines and prefer to be served by a human even if there is a risk of virus transmission. Moreover, entrusting chatbots to diagnose patients needs a large amount of medical data from experts. Also, the difference in languages from a country to another makes an already difficult task still more arduous. On the other hand, when dealing with voice analysis, there are still many challenges to be taken up. For example, until now, annotated data of patients’ voices are not publicly available for research purposes of COVID-19 detection and diagnosis. Collecting these data is mostly made in unconstrained environments (i.e. in-the-wild) using smartphones or other voice recorders. These environments are generally noisy and contain reverberation, which leads to bad quality of data and makes the diagnosis and detection of COVID-19 more challenging. Finally, one of the most important future trends is to concentrate on further decreasing the false negative rate and, as far as practicable, reducing the false positive rate by the same token to accurately differentiate viral from BP."}]
Anaphora and Coreference Resolution: A Review
A discourse is a collocated group of sentences which convey a clear understanding only when read together. The etymology of anaphora is ana (Greek for back) and pheri (Greek for to bear), which in simple terms means repetition. In computational linguistics, anaphora is typically defined as a reference to an item mentioned earlier in the discourse, or a “pointing back” reference as described by ( Mitkov , 1999 ). The most prevalent type of anaphora in natural language is the pronominal anaphora ( Lappin and Leass , 1994 ). Coreference, as the term suggests, refers to words or phrases referring to a single unique entity in the world. Anaphoric and co-referent entities themselves form a subset of the broader task of “discourse parsing” ( Soricut and Marcu , 2003 ), which is crucial for full text understanding.
Our survey presents an exhaustive overview of the entity resolution field, which forms a core component of natural language processing research. In this survey, we put forth a detailed account of the types of references and the important constraints for entity resolution with the aim of establishing the broad scope of the task. We also clarify the boundaries between the tasks of coreference resolution and anaphora resolution for more focussed research progress in the future. In addition, we also attempt to compare the predominantly used evaluation metrics. We observe that though there are multiple datasets available, the state-of-the-art methods have not been evaluated on them. With the spirit of encouraging more exhaustive evaluations, we also provide an account of the datasets released for the task.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Anaphora and Coreference Resolution: A Review* and the introduction:A discourse is a collocated group of sentences which convey a clear understanding only when read together. The etymology of anaphora is ana (Greek for back) and pheri (Greek for to bear), which in simple terms means repetition. In computational linguistics, anaphora is typically defined as references to items mentioned earlier in the discourse or “pointing back” reference as described by ( Mitkov , 1999 ). The most prevalent type of anaphora in natural language is the pronominal anaphora ( Lappin and Leass , 1994 ). Coreference, as the term suggests refers to words or phrases referring to a single unique entity in the world. Anaphoric and co-referent entities themselves form a subset of the broader term “discourse parsing” ( Soricut and Marcu , 2003 ), which is crucial for full text understanding."},{"role":"assistant","content":"Conclusion: Our survey presents an exhaustive overview of the entity resolution field, which forms a core component of natural language processing research. In this survey, we put forth a detailed account of the types of references and the important constraints for entity resolution with the aim of establishing the bread scope of the task. We also clarify the boundaries between the tasks of coreference resolution and anaphora resolution for more focussed research progress in the future. In addition, we also attempt to compare the predominantly used evaluation metrics. We observe that though there are multiple datasets available, the state-of-the-art methods have not been evaluated on them. With the spirit of encouraging more exhaustive evaluations, we also provide an account on the datasets released for the task."}]
Mining the online infosphere: A survey
Online infosphere 1 is the term corresponding to the Internet becoming a virtual parallel world formed from billions of networks of artificial life at different scales ranging from tiny pieces of software to massive AI tools running a factory or driving a car. The motivations for this are diverse, seeking to both help mankind and harm it.
In this survey we have presented a critical rundown of the evolution of the online infosphere by depicting some of the research areas that are becoming very crucial at current times. We started our discussion with a view of the infosphere as a collaborative platform, with a dedicated focus on Wikipedia. Wikipedia, a freely available resource and one of the largest knowledge bases, containing a wide variety of information, has been a primary focus of extensive research so far. In this survey we have presented a detailed account of the works on article quality monitoring, editor behaviour and retention, and malicious activities like vandalism.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Mining the online infosphere: A survey* and the introduction:Online infosphere 1 is the term corresponding to the Internet becoming a virtual parallel world formed from billions of networks of artificial life at different scales ranging from tiny pieces of software to massive AI tools running a factory or driving a car. The motivations for this are diverse, seeking to both help mankind and harm it."},{"role":"assistant","content":"Conclusion: In this survey we have presented a critical rundown on the evolution of the online infosphere by depicting some of the research areas that are becoming very crucial at current times. We started our discussion with a view of the infosphere as a collaborative platform, with a dedicated focus on Wikipedia. Wikipedia, the freely available and one of the largest knowledge base, containing a wide variety of information has been a primary focus of an extensive research so far. In this survey we have presented a detailed account of the works on article quality monitoring, editor behaviour and their retention and malicious activities like vandalism."}]
Generalizing to Unseen Domains: A Survey on Domain Generalization
Machine learning (ML) has achieved remarkable success in various areas, such as computer vision, natural language processing, and healthcare. The goal of ML is to design a model that can learn general and predictive knowledge from training data, and then apply the model to new (test) data. Traditional ML models are trained based on the i.i.d. assumption that training and testing data are identically and independently distributed. However, this assumption does not always hold in reality. When the probability distributions of training data and testing data are different, the performance of ML models often deteriorates due to domain distribution gaps. Collecting the data of all possible domains to train ML models is expensive and often even impossible. Therefore, enhancing the generalization ability of ML models is important in both industry and academia.
Generalization has always been an important research topic in machine learning research. In this article, we review the domain generalization area by providing an in-depth analysis of existing methods, datasets, and applications. We then thoroughly analyze these methods. Based on our analysis, we provide several potential research challenges that could be directions for future research. We hope that this survey can provide useful insights to interested researchers and inspire more progress in the future.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Generalizing to Unseen Domains: A Survey on Domain Generalization* and the introduction:M ACHINE learning (ML) has achieved remarkable suc- cess in various areas, such as computer vision, natural language processing, and healthcare. The goal of ML is to design a model that can learn general and predictive knowledge from training data, and then apply the model to new (test) data. Traditional ML models are trained based on the i.i.d. assumption that training and testing data are identically and independently distributed. However, this assumption does not always hold in reality. When the probability distributions of training data and testing data are different, the performance of ML models often get deteriorated due to domain distribution gaps. Collecting the data of all possible domains to train ML models is expensive and even prohibitively impossible. Therefore, enhancing the generalization ability of ML models is important in both industry and academic fields."},{"role":"assistant","content":"Conclusion: Generalization has always been an important research topic in machine learning research. In this article, we review the domain generalization areas by providing in-depth analysis of existing methods, datasets and applications. Then, we thoroughly analyze the methods. Based on our analysis, we provide several potential research challenges that could be the directions of future research. We hope that this survey can provide useful insights to researchers of interest and inspire more progress in the future."}]
How to Certify Machine Learning Based Safety-critical Systems?
Machine Learning (ML) is drastically changing the way we interact with the world. We are now using software applications powered by ML in critical aspects of our daily lives, from finance and energy to health and transportation. Thanks to frequent innovations in domains like Deep Learning (DL) and Reinforcement Learning (RL), the adoption of ML is expected to keep rising, and the economic benefits of systems powered by ML are forecast to reach \$30.6 billion by 2024 1 . However, the integration of ML in systems is not without risks, especially in safety-critical systems such as avionics or automotive, where any mistake can lead to catastrophic events 2 . Therefore, before applying and deploying any machine learning based components into a safety-critical system, these components need to be certified.
This paper provides a comprehensive overview of certification challenges for ML based safety-critical systems. We conducted a systematic review of the literature pertaining to Robustness , Uncertainty , Explainability , Verification , Safe Reinforcement Learning and Direct Certification . We identified gaps in this literature and discussed current limitations and future research opportunities. With this paper, we hope to provide the research community with a full view of certification challenges and stimulate more collaboration between academia and industry.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*How to Certify Machine Learning Based Safety-critical Systems?* and the introduction:Machine Learning (ML) is drastically changing the way we interact with the world. We are now using software applications powered by ML in critical as- pects of our daily lives; from finance, energy, to health and transportation. Thanks to frequent innovations in domains like Deep Learning (DL) and Re- inforcement Learning (RL), the adoption of ML is expected to keep rising and the economic benefits of systems powered by ML is forecast to reach 30.6 Billions \$ by 2024 1 . However, the integration of ML in systems is not with- out risks, especially in safety-critical systems such as avionic or automotive, where any mistake can lead to catastrophic events 2 . Therefore, before applying and deploying any machine learning based components into a safety-critical system, these components need to be certified."},{"role":"assistant","content":"Conclusion: This paper provides a comprehensive overview of certification challenges for ML based safety-critical systems. We conducted a systematic review of the literature pertaining to Robustness , Uncertainty , Explainability , Verification , Safe Reinforcement Learning and Direct Certification . We identified gaps in this literature and discussed about current limitations and future research opportunities. With this paper, we hope to provide the research community with a full view of certification challenges and stimulate more collaborations between academia and industry."}]
Recommender Systems for the Internet of Things: A Survey
Recent advances in identification technologies, such as wireless sensor networks, RFID, and nanotechnology, have empowered a multitude of physical things with lifted computing ability [ 100 ]. Everyday things have become recognizable, addressable, and controllable over the Internet. The potential for seamlessly integrating the physical and cyberspace has created enormous business opportunities. However, finding an effective mechanism for searching and recommending things remains a significant challenge. Recommender systems present a critical stage in promoting and investigating the advantages of IoT. They generally include procedures that facilitate consumer choices based on their preferences. Given the huge amount of information that is available through IoT, users are likely to receive numerous recommendations for services or products. Knowledge of user preferences is essential for the building of any recommender system. The most important feature of such a recommender system for IoT is its ability to exploit knowledge of human behaviour and other IoT data in order to produce accurate recommendations. With billions of IoT resources connected to and accessible on the Internet, a key question is: How can IoT data be effectively exploited as a source to build recommendation systems?
RSIoTs have become a crucial tool that can support users in various decision-making activities. In this article, we have provided an extensive review of the most notable works to date on RSIoTs. We have proposed a classification framework of three categories for the organizing and dividing of existing publications and then highlighted each category in more detail. We have also discussed the limitations of using IoT for recommendation tasks, as well as the most important requirements for the RSIoT. We have provided a detailed account of the techniques of each system and its applications. In addition, we have detailed some of the most common recommendations that are provided by building an RSIoT. Both IoT and RSs have been ongoing hot research topics in recent years. While existing studies have established a solid foundation for research into RSIoT, our analysis has generated several promising directions for future research:
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Recommender Systems for the Internet of Things: A Survey* and the introduction:Recent advances in identification technologies, such as wireless sensor networks, RFID, and nan- otechnology, have empowered a multitude of phys- ical things with lifted computing ability [ 100 ]. Ev- eryday things have become recognizable, address- able, and controllable over the Internet. The poten- tial for seamlessly integrating the physical and cy- berspace has created enormous business opportuni- ties. However, finding an effective mechanism for searching and recommending things remains a sig- nificant challenge. Recommender systems present a critical stage in promoting and investigating the advantages of IoT. They generally include proce- dures that facilitate consumer choices based on their preferences. Given the huge amount of infor- mation that is available through IoT, users are likely to receive numerous recommendations for services or products. Knowledge of user preferences is es- sential for the building of any recommender system. The most important feature of such a recommender system for IoT is its ability to exploit knowledge of human behaviour and other IoT data in order to produce accurate recommendations. With bil- lions of IoT resources connected to and accessible on the Internet, a key question is: How can IoT data be effectively exploited as a source to build recommendation systems?"},{"role":"assistant","content":"Conclusion: RSIoTs have become a crucial tool that can sup- port users in various decision-making activities. In this article, we have provided an extensive re- view of the most notable works to date on RSIoTs. We have proposed a classification framework of three categories for the organizing and dividing of existing publications and then highlighted each category in more detail. We have also discussed the limitations of using IoT for recommendation tasks, as well as the most important requirements for the RSIoT. We have provided a detailed account of the techniques of each system and its applica- tions. In addition, we have detailed some of the most common recommendations that are provided by building an RSIoT. Both IoT and RSs have been ongoing hot research topics in recent years. While existing studies have established a solid foundation for research into RSIoT, our analysis has generated several promising directions for future research:"}]
Generative Adversarial Networks: A Survey Towards Private and Secure Applications
The technological breakthrough brought by Generative Adversarial Networks (GAN) has rapidly produced a revolutionary impact on machine learning and its related fields, and this impact has already spread to various research areas and applications. As a powerful generative framework, GAN has significantly promoted many applications with complex tasks, such as image generation, super-resolution, text data manipulations, etc . Most recently, exploiting GAN to work out elegant solutions to severe privacy and security problems has become increasingly popular in both academia and industry due to its game theoretic optimization strategy. This survey aims to provide a comprehensive review and an in-depth summary of the state-of-the-art technologies and
This survey intensively reviews the state-of-the-art approaches using GAN for privacy and security in a broad spectrum of applications, including image generation, video event detection, records publishing, distributed learning, malware detection, fraud detection, and so on. For the different purposes of attack and defense, these existing approaches establish problem formulation based on the variants of GAN framework, taking into account attack success rate, classification/prediction accuracy, data utility, and other performance metrics. After a thorough analysis, the unsolved challenges and promising research directions are provided for further discussion from perspectives of application scenario, model design, and data utilization.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Generative Adversarial Networks: A Survey Towards Private and Secure Applications* and the introduction:The technological breakthrough brought by Generative Adversarial Networks (GAN) has rapidly produced a revolutionary impact on machine learning and its related fields, and this impact has already flourished to various of research areas and applications. As a powerful generative framework, GAN has significantly promoted many applications with complex tasks, such as image generation, super-resolution, text data manipulations, etc . Most recently, exploiting GAN to work out elegant solutions to severe privacy and security problems becomes increasingly popular in both academia and industry due to its game theoretic optimization strategy. This survey aims to provide a comprehensive review and an in-depth summary of the state-of-the-art technologies and"},{"role":"assistant","content":"Conclusion: This survey intensively reviews the state-of-the-art approaches using GAN for privacy and security in a broad spectrum of applications, including image generation, video event detection, records publishing, distributed learning, malware detection, fraud detection, and so on. For the different purposes of attack and defense, these existing approaches establish problem formulation based on the variants of GAN framework, taking into account attack success rate, classification/prediction accuracy, data utility, and other performance metrics. After a thorough analysis, the unsolved challenges and promising research directions are provided for further discussion from perspectives of application scenario, model design, and data utilization."}]
Multimodal Machine Learning: A Survey and Taxonomy
The world surrounding us involves multiple modalities — we see objects, hear sounds, feel texture, smell odors, and so on. In general terms, a modality refers to the way in which something happens or is experienced. Most people associate the word modality with the sensory modalities which represent our primary channels of communication and sensation, such as vision or touch. A research problem or dataset is therefore characterized as multimodal when it includes multiple such modalities. In this paper we focus primarily, but not exclusively, on three modalities: natural language which can be both written or spoken; visual signals which are often represented with images or videos; and vocal signals which encode sounds and para-verbal information such as prosody and vocal expressions.
As part of this survey, we introduced a taxonomy of multimodal machine learning: representation, translation, fusion, alignment, and co-learning. Some of them, such as fusion, have been studied for a long time, but more recent interest in representation and translation has led to a large number of new multimodal algorithms and exciting multimodal applications.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Multimodal Machine Learning: A Survey and Taxonomy* and the introduction:T HE world surrounding us involves multiple modalities — we see objects, hear sounds, feel texture, smell odors, and so on. In general terms, a modality refers to the way in which something happens or is experienced. Most people associate the word modality with the sensory modalities which represent our primary channels of communication and sensation, such as vision or touch. A research problem or dataset is therefore characterized as multimodal when it includes multiple such modalities. In this paper we focus primarily, but not exclusively, on three modalities: natural language which can be both written or spoken; visual signals which are often represented with images or videos; and vocal signals which encode sounds and para-verbal information such as prosody and vocal expressions."},{"role":"assistant","content":"Conclusion: As part of this survey, we introduced a taxonomy of multi- modal machine learning: representation, translation, fusion, alignment, and co-learning. Some of them such as fusion have been studied for a long time, but more recent interest in representation and translation have led to a large number of new multimodal algorithms and exciting multimodal applications."}]
A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics
Facial expression research has a long history and accelerated through the 1970s. The modern theory on basic emotions by Ekman et al. [1], [2], [3] has generated more research than any other in the psychology of emotion [4]. They outline 7 universal facial expressions: happy, sad, anger, fear, surprise, disgust and contempt, as the universality of emotion. When an emotional episode is triggered, there is an impulse which may induce one or more of these expressions of emotion.
We have presented a comprehensive review of datasets, features and metrics for micro-expressions analysis. The ultimate goal of this paper is to provide new insights and recommendations to advance micro-expression analysis research. We have provided guidelines for beginners and detailed challenges and recommendations for those who are already working in this area. In addition, we contribute to the research by addressing the effect of resolutions on different feature types and introducing the new emotional classes for SAMM.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics* and the introduction:F ACIAL expression research has a long history and accel- erated through the 1970s. The modern theory on basic emotions by Ekman et al [1], [2], [3] has generated more re- search than any other in the psychology of emotion [4]. They outline 7 universal facial expressions: happy, sad, anger, fear, surprise, disgust and contempt, as the universality of emotion. When an emotional episode is triggered, there is an impulse which may induce one or more of these expressions of emotion."},{"role":"assistant","content":"Conclusion: We have presented a comprehensive review on datasets, features and metrics for micro-expressions analysis. The ultimate goal of this paper is to provide new insights and recommendations to advancing the micro-expression analysis research. We have provided a good guidelines for beginners and a detailed challenges and recommendations for those who are already working in this area. In addition, we contribute to the research by addressing the effect of resolutions on different feature types and introducing the new emotional classes for SAMM."}]
A Survey among Network Operators on BGP Prefix Hijacking
BGP prefix hijacking 101. Autonomous Systems (ASes) use the Border Gateway Protocol (BGP) [15] to advertise address space (as IPv4/IPv6 network prefixes) and establish inter-domain routes in the Internet. BGP is a distributed protocol, lacking authentication of advertised routes. As a result, an AS is able to advertise illegitimate routes for IP prefixes it does not own. These advertisements propagate and “pollute” many ASes, or even the entire Internet, affecting service availability, integrity, and confidentiality of communications. This phenomenon, called BGP prefix hijacking , is frequently observed [25], and can be caused by router misconfigurations [1,2] or malicious attacks [3,22,25].
In this work, to increase community understanding of existing BGP hijacking defenses and the needs of network operators, we presented the results of a survey of 75 network operators around the world.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey among Network Operators on BGP Prefix Hijacking* and the introduction:BGP prefix hijacking 101. Autonomous Systems (ASes) use the Border Gateway Protocol (BGP) [15] to advertise address space (as IPv4/IPv6 network prefixes) and establish inter-domain routes in the Internet. BGP is a distributed pro- tocol, lacking authentication of advertised routes. As a re- sult, an AS is able to advertise illegitimate routes for IP pre- fixes it does not own. These advertisements propagate and “pollute” many ASes, or even the entire Internet, affecting service availability, integrity, and confidentiality of commu- nications. This phenomenon, called BGP prefix hijacking , is frequently observed [25], and can be caused by router mis- configurations [1,2] or malicious attacks [3,22,25]."},{"role":"assistant","content":"Conclusion: In this work, to increase community understanding of ex- isting BGP hijacking defenses and the needs of network op- erators, we presented the results of a survey of 75 network operators around the world."}]
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Deep learning [1] is providing major breakthroughs in solving the problems that have withstood many attempts of the machine learning and artificial intelligence community in the past. As a result, it is currently being used to decipher hard scientific problems at an unprecedented scale, e.g. in reconstruction of brain circuits [2]; analysis of mutations in DNA [3]; prediction of structure-activity of potential drug molecules [4], and analyzing the particle accelerator data [5] [6]. Deep neural networks have also become the preferred choice to solve many challenging tasks in speech recognition [7] and natural language understanding [8].
Deep learning is at the heart of the current rise of machine learning and artificial intelligence. In the field of Computer Vision, it has become the work horse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This article presents the first comprehensive survey on adversarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey* and the introduction:D EEP L EARNING [1] is providing major breakthroughs in solving the problems that have withstood many attempts of machine learning and artificial intelligence com- munity in the past. As a result, it is currently being used to decipher hard scientific problems at an unprecedented scale, e.g. in reconstruction of brain circuits [2]; analysis of muta- tions in DNA [3]; prediction of structure-activity of potential drug molecules [4], and analyzing the particle accelerator data [5] [6]. Deep neural networks have also become the preferred choice to solve many challenging tasks in speech recognition [7] and natural language understanding [8]."},{"role":"assistant","content":"Conclusion: Deep learning is at the heart of the current rise of ma- chine learning and artificial intelligence. In the field of Computer Vision, it has become the work horse for appli- cations ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently lead to a large influx of contributions in this direction. This article presents the first comprehensive survey on adver- sarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in the real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction."}]
A Review of Anonymization for Healthcare Data Iyiola E. Olatunji ∗ , Jens Rauch 2 , Matthias Katzensteiner 3 , and Megha Khosla 1
With the increasing adoption of healthcare information technology (HIT) by medical institutions, the generation and capture of healthcare-related data have been increasing rapidly in the past years. The application of artificial intelligence (AI) techniques already gives a glimpse of potential improvements, ranging from lung cancer nodule detection in CT scans to disease prediction and treatment [61, 80, 97]. The challenge, though, is that these AI models are usually data hungry and require large amounts of data for training. Healthcare data, on the other hand, contain highly sensitive patient information and cannot be easily shared. The reluctance behind releasing data query/analysis tools built on healthcare data can be further justified by the fundamental law of information recovery [23], which states that when a data source is queried multiple times and it returns overly accurate information for each query, the underlying data can be reconstructed partially or in full. Therefore, health data need to be protected against such leakage to ensure patients’ privacy.
In this paper, we provided a comprehensive review of anonymization models and techniques applicable to relational and graph-based healthcare data. Besides, we studied possible attacks on anonymized data and empirically demonstrated a reconstruction attack on MIMIC-III data. Finally, we discussed existing defense mechanisms while giving an overview of existing anonymization tools. We believe that our comprehensive review covering different perspectives on anonymization will assist researchers and practitioners in selecting relevant anonymization techniques based on the data type, desired privacy level, information loss, and possible adversarial behavior.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review of Anonymization for Healthcare Data Iyiola E. Olatunji ∗ , Jens Rauch 2 , Matthias Katzensteiner 3 , and Megha Khosla 1* and the introduction:With the increasing adoption of healthcare information technology (HIT) by medical institutions, the generation and capture of healthcare-related data have been increasing rapidly in the past years. The application of artificial intelligence (AI) techniques already gives a glimpse of potential improvements ranging from lung cancer nodules detection in CT scans to disease prediction and treatment [61, 80, 97]. The challenge though is that these AI models are usually data hungry and require large amounts of data for training. Health care data, on the other hand, contains highly sensitive patient information and cannot be easily shared. The reluctance behind releasing data query/analysis tools build on health care data can be further justified by the fundamental law of information recovery [23] which states that when a data source is queried multiple times and it returns overly accurate information for each query, the underlying data can be reconstructed partially or in full. Therefore, health data need to be protected against such leakage to ensure patient’s privacy."},{"role":"assistant","content":"Conclusion: In this paper, we provided a comprehensive review of anonymization models and techniques applicable for relational and graph-based health care data. Besides, we studied possible attacks on anonymized data and empirically demonstrated reconstruction attack on MIMIC-III data. Finally we discussed existing defense mechanisms while giving an overview of existing anonymization tools. We believe that our comprehensive review covering different perspectives on anonymization will assist researchers and practitioners in selecting relevant an ony miz ation techniques based on the data type, desired privacy level, information loss, and possible adversarial behavior."}]
A Survey and Implementation of Performance Metrics for Self-Organized Maps
Self-Organizing Maps (SOM) [ 1 ] have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry [ 2 , 3 ] and humanities [ 4 ] as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can trust the resulting mapping, and perform model selection to select algorithm parameters (e.g. the map size, learning rate and number of iterations). Concretely, two questions need to be answered:
In this paper, we reviewed various internal and external performance metrics for SOM and introduced the SOMperf Python module, enabling practitioners to easily evaluate their models. Future work perspectives include the computation of per-unit metrics, a SOM visualization module, as well as distance functions between self-organized models. In addition, other more recent SOM quality metrics could be implemented, such as the map embedding accuracy [23].
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A S URVEY AND I MPLEMENTATION OF P ERFORMANCE M ETRICS FOR S ELF -O RGANIZED M APS* and the introduction:Self-Organizing Maps (SOM) [ 1 ] have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry [ 2 , 3 ] and humanities [ 4 ] as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can trust the resulting mapping, and perform model selection to select algorithm parameters (e.g. the map size, learning rate and number of iterations). Concretely, two questions need to be answered:"},{"role":"assistant","content":"Conclusion: In this paper, we reviewed various internal and external performance metrics for SOM and introduced the SOMperf Python module, enabling practitioners to easily evaluate their models. Future work perspectives include the computation of per-unit metrics, a SOM visualization module, as well as distance functions between self-organized models. In addition, other more recent SOM quality metrics could be implemented, such as the map embedding accuracy [23]."}]
Meta-Learning in Neural Networks: A Survey
Contemporary machine learning models are typically trained from scratch for a specific task using a fixed learning algorithm designed by hand. Deep learning-based approaches have seen great successes in a variety of fields [1]–[3]. However, there are clear limitations [4]. For example, successes have largely been in areas where vast quantities of data can be collected or simulated, and where huge compute resources are available. This excludes many applications where data is intrinsically rare or expensive [5], or compute resources are unavailable [6], [7].
The field of meta-learning has recently seen a rapid growth in interest. This has come with some level of confusion, with regards to how it relates to neighbouring fields, what it can be applied to, and how it can be benchmarked. In this survey we have sought to clarify these issues by thoroughly surveying the area both from a methodological point of view – which we broke down into a taxonomy of meta-representation, meta-optimizer and meta-objective; and from an application point of view. We hope that this survey will help newcomers and practitioners to orient themselves in this growing field, as well as highlight opportunities for future research.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Meta-Learning in Neural Networks: A Survey* and the introduction:Contemporary machine learning models are typically trained from scratch for a specific task using a fixed learn- ing algorithm designed by hand. Deep learning-based ap- proaches have seen great successes in a variety of fields [1]–[3]. However there are clear limitations [4]. For example, successes have largely been in areas where vast quantities of data can be collected or simulated, and where huge compute resources are available. This excludes many applications where data is intrinsically rare or expensive [5], or compute resources are unavailable [6], [7]."},{"role":"assistant","content":"Conclusion: The field of meta-learning has recently seen a rapid growth in interest. This has come with some level of confusion, with regards to how it relates to neighbouring fields, what it can be applied to, and how it can be benchmarked. In this survey we have sought to clarify these issues by thoroughly surveying the area both from a methodological point of view – which we broke down into a taxonomy of meta-representation, meta-optimizer and meta-objective; and from an application point of view. We hope that this survey will help newcomers and practitioners to orient themselves in this growing field, as well as highlight op- portunities for future research."}]
An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods
Relation extraction (RE), a subtask of information extraction, is a foundational task in natural language processing (NLP). The RE task is to determine a relationship between two distinct entities from text, producing fact triples in the form [ head , relation , tail ] or, as referred to in some works, [ subject , predicate , object ]. For example, after reading the Wikipedia page on Noam Chomsky, we learn that Noam was born in Philadelphia, Pennsylvania, which corresponds to the fact triple [ Noam Chomsky , born in , Philadelphia ]. Fact triples are foundational to human knowledge and play a key role in many downstream NLP tasks such as question-answering, search queries, and knowledge-graph completion ( Xu et al. , 2016 ; Lin et al. , 2015 ; Li et al. , 2014 ).
In this survey, we reviewed the relation extraction task with a focus on distant supervision. We traced the history of RE methods by discussing exemplary works and highlighting shortcomings to contextualize progress. We noted the key differences between pattern-based methods, statistical-based methods, neural-based methods, and finally, large language model-based methods. We conducted an overview of popular RE datasets and discussed some limitations of both corpus-based and instance-based RE evaluation.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods* and the introduction:Relation extraction (RE), a subtask of information extraction, is a foundational task in natural lan- guage processing (NLP). The RE task is to deter- mine a relationship between two distinct entities from text, producing fact triples in the form [ head , relation , tail ] or, as referred to in some works, [ sub- ject , predicate , object ]. For example, after reading the Wikipedia page on Noam Chomsky, we learn that Noam was born in Philadelphia, Pennsylvania, which corresponds to the fact triple [ Noam Chom- sky , born in , Philadelphia ]. Fact triples are foun- dational to human knowledge and play a key role in many downstream NLP tasks such as question- answering, search queries, and knowledge-graph completion ( Xu et al. , 2016 ; Lin et al. , 2015 ; Li et al. , 2014 )."},{"role":"assistant","content":"Conclusion: In this survey, we reviewed the relation extraction task with a focus on distant supervision. We traced the history of RE methods by discussing exemplary works and highlighting shortcomings to contextual- ize progress. We noted the key differences between pattern-based methods, statistical-based methods, neural-based methods, and finally, large language model-based methods. We conducted an overview of popular RE datasets and discussed some limita- tions of both corpus-based and instance-based RE evaluation."}]
Scene Flow Estimation: A Survey
Scene flow is a three-dimensional motion field of the surface in world space; in other words, it gives the three-dimensional displacement vector of each surface point between two frames. As with most computer vision problems, scene flow estimation is essentially an ill-posed energy minimization problem with three unknowns. Prior knowledge in multiple aspects is required to make the energy function solvable with just a few pairs of images. Hence, it is essential to fully make use of information from the data source and to weigh different prior knowledge for better performance.
This paper presents a comprehensive and up-to-date survey of both scene flow estimation methods and evaluation methodologies, the first such survey in the 17 years since scene flow was introduced. We have discussed most of the estimation methods so that researchers can have a clear view of this field and get inspired for their studies of interest. The representative methods are highlighted so that the differences between these methods are clear, and the similarities between top-tier methods can be seen as a tendency for modification. The widely used benchmarks have been analyzed and compared, as have multiple evaluation protocols. This paper provides sufficient information for researchers to choose the appropriate datasets and protocols for evaluating the performance of their algorithms.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Scene Flow Estimation: A Survey* and the introduction:Scene flow is a three-dimensional motion field of the surface in world space, or in other words, it shows the three-dimensional displacement vector of each surface point between two frames. As most computer vision issues are, scene flow estimation is essentially an ill-posed energy minimization problem with three unknowns. Prior knowledge in multiple aspects is required to make the energy function solvable with just a few pairs of images. Hence, it’s essential to fully make use of information from the data source and to weigh different prior knowledge for a better performance."},{"role":"assistant","content":"Conclusion: This paper presents a comprehensive and up-to-date survey on both scene flow estimation methods and the evaluation methodologies for the first time after 17 years since scene flow was introduced. We have discussed most of the estimation methods so researchers could have a clear view of this field and get inspired for their studies of interest. The representative methods are highlighted so the differences between these methods are clear, and the similarities between top-tier methods can be seen as a tendency for modification. The widely used benchmarks have been analyzed and compared, so are multiple evaluation protocols. This paper provides sufficient information for researchers to choose the appropriate datasets and protocols for evaluating performance of their algorithms."}]
Few-shot Learning: A Survey
“Can machines think [ 121 ]? ” This is the question raised in Alan Turing’s seminal paper entitled
Few-Shot Learning (FSL) is an important machine learning problem on the path toward real artificial intelligence. In this survey, we provide a comprehensive and systematic review of FSL. We first formally define FSL and discuss how it relates to, and differs from, relevant learning problems such as semi-supervised learning, imbalanced learning, transfer learning and meta-learning. We then reveal the core issues of FSL and classify existing works into a unified taxonomy according to how they address these issues through data, model and algorithm: using prior knowledge, the data perspective augments the supervised experience available to FSL, the model perspective constrains the hypothesis space, and the algorithm perspective alters the search for parameters in the hypothesis space. We perform an extensive literature review, from the birth of FSL to the most recently published works, and organize the approaches into categories with thorough comparison. As possible future work for FSL, we suggest directions in terms of problem setup, techniques, applications and theory, hoping these can inspire solutions to the FSL problem and bring us closer to human-like machines one day.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Few-shot Learning: A Survey* and the introduction:“Can machines think [ 121 ]? ” This is the question raised in Alan Turing’s seminal paper entitled"},{"role":"assistant","content":"Conclusion: Few-Shot Learning (FSL) is an important machine learning problem to facilitate real artificial intelligence. In this survey, we provide a comprehensive and systematic review of FSL. We first formally define FSL, and discuss the relatedness and difference of FSL with respect to relevant learning problems such as semi-supervised learning, imbalanced learning, transfer learning and meta-learning. Then, we reveal the core issues of FSL and classify existing works into a unified taxonomy in terms of how they deal with the core issues, i. e., data, model and algorithm. Using the prior knowledge, data augments the supervised experience of FSL, model constrains the hypothesis space of FSL, and algorithm alters the search of parameter in hypothesis space to solve FSL. We perform extensive literature review from the birth of FSL to the most recent published ones, and categorizes them in into different categories with thorough comparison. As possible future works for FSL, we provide possible directions in terms of problem setup, techniques, applications and theory, hoping these can inspire of solving of FSL problem and achieve human-like machines one day."}]
A survey on haptic technologies for mobile augmented reality
AR and MAR have attracted interest from both industry and academia in the last decade. MAR enhances the real world of a mobile user with computer-generated virtual content. AR applications combine real and virtual objects in a physical environment, are interactive in real time and display an augmented view. The advances in mobile computing, computer vision, and networking have enabled the AR ecosystem. Due to the mobile nature of MAR applications, they tend to run on mobile or wearable devices such as smartphones, tablets, and smart-glasses. These devices provide user mobility, but at the cost of constrained resources such as computing power and energy. The computational constraints of these mobile devices limit the performance and design of AR applications in the mobile environment. Therefore, cloud infrastructures ([30], [36]), computing-offloading solutions ([34], [63]), service providers, cloudlets ([19]) and Fog computing ([18]) continue to deploy innovative services to provide a real-time AR experience ([21]). Microsoft, Facebook and Apple have shown their interest in AR applications and believe in the viability of this technology. The current trend of mobile AR applications has affected the mobile market; some well-known commercial MAR applications, such as Pokemon GO, are location-based AR mobile games.
In this survey, we depict the state of the art of several haptic devices and their capabilities as wearables in the MAR ecosystem. Furthermore, we classify haptic feedback devices by their sensory nature and their design characteristics, such as mid-air and exoskeleton designs. We start with a brief description of the main features of haptic devices and the importance of audio and visual modalities as non-haptic channels in enhancing the UX and improving overall interaction performance. We then analyze the main characteristics of the proposed devices and their applicability as wearables for MAR applications. Although there are many works and commercial products, an affordable, portable and simple approach to haptic wearable devices is still missing. Moreover, the fidelity of these devices is limited to a single scenario such as surface/texture rendering, grasping, or pushing. Combining several haptic devices to achieve better feedback has been attempted by several authors, but their size or difficult implementation hinders deployment in mobile environments, where the scenarios and circumstances surrounding the user can change. With this work we aim to provide a better understanding of the mechanisms, challenges and future possibilities of haptic feedback in the MAR field.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A survey on haptic technologies for mobile augmented reality* and the introduction:AR and MAR have atracted interest from both industry and academia in the last decade. MAR enhances the real world of a mobile user with computer-generated virtual content. AR applications combine real and virtual objects in a physical environment, are interactive in real time and display an augmented view. Te advances in mobile computing, computer vision, and networking have enabled the AR ecosystem. Due to the mobile nature of MAR applications, they tend to run on mobile or wearable devices such as smartphones, tablets, smart-glasses. Tese device provide user’s mobility, but at the cost of constrained resources such as computing-power, and energy. Te computational constrains of these mobile devices limit the performance and design of AR applications in the mobile environment. Terefore, cloud infrastructures ([ 30 ], [ 36 ]), computing-ofoading solutions ([ 34 ], [ 63 ]), service providers, cloudlets ([ 19 ]) and Fog computing ([ 18 ]) continue to deploy innovative services to provide a real time AR experience ([ 21 ]). Microsof 1 , Facebook 2 and Apple 3 have shown their interest in AR applications and they believe in the viability of this technology. Te current trend of mobile AR applications has afected the mobile market. Some well-known commercial MAR applications such as Pokemon GO 4 are location-based AR mobile games."},{"role":"assistant","content":"Conclusion: In this survey, we depict the state-of-the-art of several haptic devices and their capabilities as wearables in MAR ecosystem. Furthermore, we classify the haptic feedback devices by their sensory nature and their design characteristics, such as mid-air, and exoskeleton. We start with a brief description of the main features of haptic devices and the importance of audio and visual as non-haptic devices in enhancing the UX and improving the overall interaction performance. We analyze the main characteristics of the proposed devices, and their applicability as wearables for MAR applications. Although there are many works and commercial products, an afordable, portable and simple approach for haptic wearable devices is still missing. Moreover, the fdelity of these devices is limited to one scenario such as surface/texture rendering, grasping, or pushing. Te combination of more haptic devices to achieve beter feedback has been done by several authors but the size, or difcult implementation hinders their deployment in mobile environments, where the scenarios and circumstances surrounding the user can change. With this work we aim to provide a beter understanding of mechanisms, challenges and future possibilities of haptic feedback in the MAR feld."}]
A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges
Machine learning models commonly make the closed-set assumption, where the test data is drawn i.i.d. from the same distribution as the training data. Yet in practice, all types of test input data—even those on which we have not trained the classifier—can be encountered. Unfortunately, models can assign misleading confidence values for unseen test samples [1], [2], [3], [4], [5]. This leads to concerns about the reliability of classifiers, particularly for safety-critical applications [6]. In the literature, several fields attempt to address the issue of identifying the unknowns/anomalies/out-of-distribution data in the open-world setting. In particular, the problems of anomaly detection (AD), Novelty Detection
In many applications, it is not feasible to model all kinds of classes occurring during testing; thus, scenarios existing in domains such as OOD detection, OSR, ND (one-class learning), and AD become ubiquitous. Up to this time, these domains, in spite of having the same intention and a large intersection, have been followed roughly independently by researchers.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges* and the introduction:M ACHINE learning models commonly make the closed- set assumption, where the test data is drawn i.i.d from the same distribution as the training data. Yet in practice, all types of test input data—even those on which we have not trained the classifier—can be encountered. Unfortunately, models can assign misleading confidence values for unseen test samples [ 1 ], [ 2 ], [ 3 ], [ 4 ], [ 5 ]. This leads to concerns about the reliability of classifiers, particularly for safety-critical ap- plications [ 6 ]. In literature, several fields attempt to address the issue of identifying the unknowns/anomalies/out-of- distribution data in the open-world setting. In particular, the problems of anomaly detection (AD), Novelty Detection"},{"role":"assistant","content":"Conclusion: In many applications, it is not feasible to model all kinds of classes occurring during testing; thus, scenarios existing in domains such as OOD detection, OSR, ND (one-class learning), and AD become ubiquitous. Up to this time, these domains, in spite of having the same intention and a large intersection, have been followed roughly independently by researchers."}]
A Brief Survey on Deep Learning Based Data Hiding, Steganography and Watermarking
Seeing is not always believing, i.e., a natural-looking image can contain secret information that is invisible to the general public. Data hiding conceals a secret message within a transport medium, such as a digital image, and its essential property is imperceptibility, which serves the fundamental goal of remaining hidden. With easy access to the Internet and the growing popularity of social media platforms, digital media such as images and video have become the most commonly used hosts for secure data transfer in applications ranging from secret communication and copyright protection to content authentication. Data hiding schemes are characterized by three requirements: capacity regarding the embedded payload, security in terms of being undetectable by steganalysis, and robustness against distortions in the transmission channel. There is a trade-off among these three requirements, namely capacity, security and robustness [Kadhim et al., 2019; Zhang et al., 2020a], as depicted in Figure 1. For example, a hiding algorithm that is capacity-oriented is often subject to low security and robustness. Generally speaking, data hiding aims to hide as much information as possible when no extra constraint is applied. Secure steganography and robust watermarking, as the terms suggest, prioritize security and robustness, respectively. Most traditional data hiding methods are carried out under the distortion-coding framework, which allocates different distortions to different cover elements and
Deep learning based data hiding has become an emerging field and is drawing increasing attention. Our work conducts a brief survey on this topic by first outlining three basic meta-architectures. We further discuss the challenges of deep hiding in various applications, such as steganography, light field messaging and watermarking. Finally, we discuss its impact on the field of adversarial attacks and vice versa. A joint investigation of data hiding and adversarial attacks will be an interesting direction with possible new insights.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Brief Survey on Deep Learning Based Data Hiding, Steganography and Watermarking* and the introduction:Seeing is not always believing, i . e . a natural-looking image can contain secret information that is invisible to the gen- eral public. Data hiding enables concealing a secret message within a transport medium, such as a digital image, and its essential property lies in im perce pti bil it y for achieving the fundamental goal of being hidden. With easy access to the Internet and gaining popularity of the social media platform, digital media, such as image or video, has become the most commonly used host for secure data transfer in applications ranging from secret communication, copy-right protection to content authentication. Data hiding schemes are character- ized by three requirements: capacity regarding the embed- ded payload, security in terms of being undetectable by ste- ganalysis, robustness against distortions in the transmission channel. There is a trade-off among the above three require- ments, namely capacity , security and robustness [Kadhim et al. , 2019; Zhang et al. , 2020a] as depicted in Figure 1. For example, a hiding algorithm that is capacity-oriented is of- ten subject to low security and robustness. Generally speak- ing, data hiding aims to hide more information given no extra constraint is applied. Secure steganography and robust wa- termarking, as the term suggests, prioritize security and ro- bustness, respectively. Most traditional data hiding methods are carried out under the distortion-coding framework, which allocates different distortions to different cover elements and"},{"role":"assistant","content":"Conclusion: Deep learning based data hiding has become an emerging field and is drawing increasing attention. Our work conducts a brief survey on this topic by first outlining three basic meta- architectures. We further discuss the challenges of deep hid- ing in various applications, such as steganography, light field messaging and watermarking. Finally, we discuss its impact on the field of adversarial attack and vice versa. A joint in- vestigation of data hiding and adversarial attack will be an interesting direction with possible new insights."}]
A Comprehensive Survey on Automatic Knowledge Graph Construction
Knowledge graphs (KGs) provide well-organized human knowledge for applications like search engines [ 1 ], recommendation systems [ 2 ], and question answering [ 3 ].
With this paper, we delivered a comprehensive survey on the topic of knowledge graph construction. Specifically, we reviewed the tasks, methods, challenges, and related resources used to construct, refine, and integrate KGs from various data types in different scenarios. To probe into the essential topics for the big data environment, we systematically presented the paragon models for obtaining fine-grained concepts (entity typing), dealing with low-resource knowledge (extraction tasks in few- shot scenarios), understanding large linguistic objects (document-level relation extraction), complex reasoning (logic and interpretable reasoning) and handling conditional structures (temporal and general conditions) in knowledge graphs. Moreover, we provided briefs on practical KG toolkits and projects. In conclusion, knowledge graph construction has become a critical topic for enabling human intelligence in AI applications. In the future, the research community will certainly be searching for more paradigms to empower KGs with wisdom in massive heterogeneous, autonomous, complex, and evolving data environments while enhancing collaborations between knowledge communities.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Comprehensive Survey on Automatic Knowledge Graph Construction* and the introduction:Knowledge graphs (KGs) provide well-organized human knowledge for applications like search engines [ 1 ], recommendation systems [ 2 ], and question answering [ 3 ]."},{"role":"assistant","content":"Conclusion: With this paper, we delivered a comprehensive survey on the topic of knowledge graph construction. Specifically, we reviewed the tasks, methods, challenges, and related resources used to construct, refine, and integrate KGs from various data types in different scenarios. To probe into the essential topics for the big data environment, we systematically presented the paragon models for obtaining fine-grained concepts (entity typing), dealing with low-resource knowledge (extraction tasks in few- shot scenarios), understanding large linguistic objects (document-level relation extraction), complex reasoning (logic and interpretable reasoning) and handling conditional structures (temporal and general conditions) in knowledge graphs. Moreover, we provided briefs on practical KG toolkits and projects. In conclusion, knowledge graph construction has become a critical topic for enabling human intelligence in AI applications. In the future, the research community will certainly be searching for more paradigms to empower KGs with wisdom in massive heterogeneous, autonomous, complex, and evolving data environments while enhancing collaborations between knowledge communities."}]
Using Meta-heuristics and Machine Learning for Software Optimization of Parallel Computing Systems: A Systematic Literature Review
Traditionally, parallel computing systems [69] have been used for scientific and technical computing. Scientific and engineering computational problems are usually complex and resource intensive. To solve these problems efficiently, parallel computing systems that may comprise multiple processing units are needed. The emergence of multi-core and many-core processors in the last decade has made parallel computing systems pervasive, from embedded systems and personal computers to data centers and supercomputers. While in the past parallel computing was the focus of only a small group of scientists and engineers at supercomputing centers, nowadays programmers of virtually all systems are exposed to parallel processors comprising multiple or many cores [46].
In this article, we have conducted a systematic literature review that describes approaches that use machine learning and meta-heuristics for software optimization of parallel computing systems. We have classified approaches based on the software life-cycle activities at compile-time and run-time, including the code optimization and generation, scheduling, and adaptation. We have discussed the shortcomings of existing approaches and provided recommendations for future research directions.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Using Meta-heuristics and Machine Learning for Software Optimization of Parallel Computing Systems: A Systematic Literature Review* and the introduction:Traditionally, parallel computing [69] systems have been used for scientific and technical com- puting. Usually scientific and engineering computational problems are complex and resource intensive. To e ffi ciently solve these problems, utilization of parallel computing systems that may comprise multiple processing units is needed. The emergence of multi-core and many-core pro- cessors in the last decade led to the pervasiveness of parallel computing systems from embedded systems, personal computers, to data centers and supercomputers. While in the past parallel com- puting was a focus of only a small group of scientists and engineers at supercomputing centers, nowadays virtually programmers of all systems are exposed to parallel processors that comprise multiple or many cores [46]."},{"role":"assistant","content":"Conclusion: In this article, we have conducted a systematic literature review that describes approaches that use machine learning and meta-heuristics for software optimization of parallel computing systems. We have classified approaches based on the software life-cycle activities at compile-time and run-time, including the code optimization and generation, scheduling, and adaptation. We have discussed the shortcomings of existing approaches and provided recommendations for future research directions."}]
A Survey on Echo Chambers on Social Media: Description, Detection and Mitigation
Having access to verified and trusted information is crucial in the midst of the COVID-19 pandemic, one of the most significant health crises (Mallah et al. 2021) in recent history. Exposure to misinformation on social media has been linked to COVID-19 vaccine hesitancy (Loomba et al. 2021), the belief that 5G towers spread the virus (Ahmed et al. 2020b), the misconception that a COVID-19 vaccine candidate caused the death of trial participants, and a widely held view that the virus is a conspiracy or a bioweapon (Douglas 2021). These beliefs threaten the response to the pandemic and promote actions that can lead to the spread of the virus. In this regard, the Alan Turing Institute (Seger et al. 2020) classified epistemic security as a fundamental challenge for society when facing a situation that requires taking collective action to respond to crises (e.g., global pandemics) or complex challenges (e.g., climate change). They define epistemic security as reliably preventing threats to the production, distribution, consumption, and assessment of reliable information within a society. Echo chambers on social media are identified (Seger et al. 2020) as one of the core threats to epistemic security as they can drastically increase the spread and even creation of misinformation on social media (Del Vicario et al. 2019, 2016a; Zollo et al. 2017; Zollo and Quattrociocchi 2018). The presence of misinformation on social media is a well-documented problem (Shu et al. 2020; Wu et al. 2019). Social media is a prominent source of news and information about COVID-19 and other current events for most of us. Currently, more than half of adults in the US say that they get their news from social media (Shearer and Mitchell 2021).
While the emergence of echo chambers can seem to be an unstoppable wave, we must realize that there is hope for a better information ecosystem. We showed that echo chambers are largely a byproduct of recommender systems. As such, what has been manufactured by these systems can likewise be deconstructed by them. Social media may not currently live up to its promise of bringing us closer together and fostering better conversations. The future does not have to be this way: through research and a structured strategy, a less polarized world is possible.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Echo Chambers on Social Media: Description, Detection and Mitigation* and the introduction:Having access to verified and trusted information is cru- cial in the midst of the COVID-19 pandemic, one of the most significant health crises (Mallah et al. 2021) in recent history. Exposure to misinformation on social media has been linked to COVID-19 vaccine hesitancy (Loomba et al. 2021), the belief that 5G towers spread the virus (Ahmed et al. 2020b), the misconception that a COVID-19 vac- cine candidate caused the death of trial participants 1 , and a widely held view that the virus is a conspiracy or a bioweapon (Douglas 2021). These beliefs threaten the re- sponse to the pandemic and promote actions that can lead to the spread of the virus. In this regard, the Alan Tur- ing Institute (Seger et al. 2020) classified epistemic secu- rity as a fundamental challenge for society when facing a situation that requires taking collective action to respond to crises (e.g., global pandemics) or complex challenges (e.g., climate change). They define epistemic security as reliably preventing threats to the production, distribution, consump- tion, and assessment of reliable information within a soci- ety. Echo chambers on social media are identified (Seger et al. 2020) as one of the core threats to epistemic security as they can drastically increase the spread and even creation of misinformation on social media (Del Vicario et al. 2019, 2016a; Zollo et al. 2017; Zollo and Quattrociocchi 2018). The presence of misinformation on social media is a well- documented problem (Shu et al. 2020; Wu et al. 2019). So- cial media is a prominent source of news and information about COVID-19 and other current events for most of us. Currently, more than half of adults in the US say that they get their news from social media (Shearer and Mitchell 2021)."},{"role":"assistant","content":"Conclusion: While the emergence of echo chambers can seem to be an unstoppable wave, we must realize that there is hope for a better information ecosystem. We showed that echo cham- bers are largely a byproduct of recommender systems. As such, what has been manufactured by these systems can like- wise be deconstructed by these systems. Social media may not currently live up to its’ promise of bringing us closer to- gether and fostering better conversations presently. The fu- ture does not have to be this way – through research and a structured strategy a less polarized world is possible."}]
A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature
From 2009 to 2015, the number of bachelor’s degrees in Computer Science increased by 74 percent, while overall growth across all fields of study only rose by 16 percent [ 50 ]. Taulbee’s 2019 [ 75 ] report shows an increase from a recent minimum of under 10,000 bachelor’s degrees conferred in 2009 to over 30,000 degrees in 2019 (see Figure 1B [ 75 ]). It is undeniable that there has been a rapid increase in interest in computer science courses at higher-education institutions. We see 1 similar demand in K-12 with CSforAll initiatives. Educators have struggled with how to cope with new and exacerbated challenges in computing education - ranging from how to scale up to handle new enrollments to which teaching techniques are best during this time of growth. The COVID-19 pandemic has added additional investigation into the challenges of online instruction.
Our research goal was to characterize the reporting of empiricism in Computing Education Research literature by identifying whether publications include the content necessary for researchers to perform replications, meta-analyses, and theory building. This systematic literature review summarizes the types of papers and studies included during 2014 and 2015 in the SIGCSE TS, ICER, ITiCSE, TOCE, and CSE venues. A majority of the accepted papers report empirical work; however, those papers do not consistently follow reporting norms. We have provided suggestions to authors and reviewers to move the community forward in publishing high-quality empirical work that can lead to meta-analysis and theory building.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature* and the introduction:From 2009 to 2015, the number of bachelor’s degrees in Computer Science increased by 74 percent, while overall growth across all fields of study only rose by 16 percent [ 50 ]. Taulbee’s 2019 [ 75 ] report shows an increase from a recent minimum of under 10,000 bachelor’s degrees conferred in 2009 to over 30,000 degrees in 2019 (see Figure 1B [ 75 ]). It is undeniable that there has been a rapid increase in interest in computer science courses at higher-education institutions. We see 1 similar demand in K-12 with CSforAll initiatives. Educators have struggled with how to cope with new and exacerbated challenges in computing education - ranging from how to scale up to handle new enrollments to which teaching techniques are best during this time of growth. The COVID-19 pandemic has added additional investigation into the challenges of online instruction."},{"role":"assistant","content":"Conclusion: Our research goal was to characterize the reporting of empiricism in Computing Education Re- search literature by identifying whether publications include content necessary for researchers to perform replications, meta-analyses, and theory building. This systematic literature review sum- marizes the type of papers and studies included during 2014 and 2015 in the SIGCSE TS, ICER, ITiCSE, TOCE, and CSE venues. A majority of the accepted papers report empirical work. How- ever those papers do not consistently follow reporting norms. We have provided suggestions to authors and reviewers to move the community forward in publishing high-quality empirical work that can lead to meta-analysis and theory building."}]
Did Chatbots Miss Their ’Apollo Moment’? A Survey of the Potential, Gaps and Lessons from Using Collaboration Assistants During COVID-19
COVID-19 is a global pandemic which started in China in the winter of 2019 and has spread around the world, with over 100 million cases, killing more than two million people by Feb 2021 [WHO, 2021]. As the disease has progressed, new hot spots of the disease have emerged, first in South-East Asia, then Europe, and then in the US, South America and South Asia. The disease has evolved, and regions around the world have also switched their responses frequently while waiting for an effective vaccine to be developed and made widely available for a lasting cure. The impact of the COVID pandemic has varied globally over geography and time, as measured by the number of cases and deaths, depending on the demographics of the local population as well as the public health policies implemented in response. A compilation of resources can be found at [Srivastava, 2020].
COVID-19 caused a major disruption in the lives of people around the world, who were looking for help with decisions in all aspects of their lives. At this juncture, chatbots, as the AI technology for providing personalized decision support at scale, were most needed. In this paper, we reviewed the range of methods available to build them and the capabilities they can offer. We then looked at how chatbots were positioned for benefit in health and at the limited pre-COVID evidence of their impact. COVID triggered the launch of new disease-specific chatbots as well as accelerated adoption of existing ones in customer care and mental health. However, most of them worked only in simple scenarios and raised questions about usability, effectiveness and the handling of user privacy. We identified gaps from this experience and drew lessons that can be used for future health exigencies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Did Chatbots Miss Their ’Apollo Moment’ ∗ ? A Survey of the Potential, Gaps and Lessons from Using Collaboration Assistants During COVID-19* and the introduction:COVID-19 1 is a global pandemic which started in China in the winter of 2019 and has spread around the world with over 100 million cases and killing more than two million people by Feb 2021 [WHO, 2021]. As the disease has progressed, new hot spots of the disease have emerged first in South-East Asia, then Europe and then in US, South America and South Asia. The disease has evolved and regions around the world have also switched their responses frequently while waiting for an effective vaccine to be developed and widely available for lasting cure. The impact of COVID pandemic has var- ied globally over geography and time, as measured by num- ber of cases and deaths, depending on demographics of the local population as well as the public health policies imple- mented in response. A compilation of resources can be found at [Srivastava, 2020]."},{"role":"assistant","content":"Conclusion: COVID19 caused a major disruption in the lives of people around the world and they were looking for help with deci- sions in all aspects of their lives. At this juncture, chatbots as the AI technology for providing personalized decision sup- port at scale, was most needed. In this paper, we reviewed the range of methods available to build them and capabilities they can offer. We then looked at how chatbots were positioned for benefit in health and the limited evidence that existed be- fore COVID of their impact. COVID triggered launching of disease-specific new chatbots as well as accelerated adoption of existing one in customer care and mental health. However, most of them worked in simple scenarios and raised questions about usability, effectiveness and handling of user privacy. We identified gaps from the experience and drew lessons that can be used for future health exigencies."}]
Domain Adaptation for Visual Applications: A Comprehensive Survey
While huge volumes of unlabeled data are generated and made available in many domains, the cost of acquiring data labels remains high. To overcome the burden of annotation, alternative solutions have been proposed in the literature that exploit available unlabeled data from the domain (referred to as semi-supervised learning), or labeled data or models available in similar domains (referred to as transfer learning). Domain Adaptation (DA) is a particular case of transfer learning (TL) that leverages labeled data in one or more related source domains to learn a classifier for unseen or unlabeled data in a target domain, where it is generally assumed that the task is the same, i.e., the class labels are shared with the source domains. The domains are assumed to be related but not identical; if they were identical, this would become a standard machine learning (ML) problem, which assumes the test data is drawn from the same distribution as the training data. When this assumption is not verified, i.e., the distributions of the training and test sets do not match, the performance at test time can be significantly degraded.
This paper attempted to provide an overview of different visual domain adaptation solutions, including shallow methods and more recent deep models. We grouped the methods both by the similarity of the problem they address (homogeneous versus heterogeneous data, unsupervised versus semi-supervised scenarios) and by the solution proposed (feature transformation, instance reweighting, deep models, online learning). We also reviewed methods that handle domain adaptation for heterogeneous data, as well as approaches that address computer vision problems beyond image classification, such as object detection or multi-object tracking. Finally, we ended the paper by situating domain adaptation within a larger context, relating it to other transfer learning techniques as well as to traditional machine learning approaches.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Domain Adaptation for Visual Applications: A Comprehensive Survey* and the introduction:While huge volumes of unlabeled data are generated and made available in many domains, the cost of acquiring data labels remains high. To overcome the burden of annotation, alternative solutions have been proposed in the literature in order to exploit available unlabeled data from the domain (referred to as semi-supervised learning), or labeled data or models available in similar domains (referred to as transfer learning). Domain Adaptation (DA) is a particular case of transfer learning (TL) that leverages labeled data in one or more related source domains, to learn a classifier for unseen or unlabeled data in a target domain, where in general it is assumed that the task is the same, i.e . class labels shared with the source domains. The domains are assumed to be related, but not identical in which case it becomes a standard machine learning (ML) problem that assumes the test data is drawn from the same distribution as the training data. When this assumption is not verified, i.e . the distributions on training and test set do not match, the performance at test time can be significantly degraded."},{"role":"assistant","content":"Conclusion: This paper attempted to provide an overview of different visual domain adaptation solutions, including shallow methods and more recent deep models. We grouped the methods both by their similarity concerning the prob- lem (homogeneous versus heterogeneous data, unsupervised versus semi-supervised scenario) and the solution proposed (feature transformation, instance reweighing, deep models, online learning). We also reviewed meth- ods that solve domain adaptation in the case of heterogeneous data as well as approaches that addresses com- puter vision problems beyond image classification, such as object detection or multi-object tracking. Finally, we ended the paper situating domain adaptation within a larger context relating it to other transfer learning techniques as well as to traditional machine learning approaches."}]
State Representation Learning for Control: An Overview
Robotics control and artificial intelligence (AI) in a broad perspective heavily rely on the availability of compact and expressive representations of the sensor data. Designing such representations has long been performed manually by the designer, but deep learning now provides a general framework to learn such representations from data. This is particularly interesting for robotics where multiple sensors (such as cameras) can provide very high dimensional data, while the robot objective can often be expressed in a much lower dimensional space (such as the 3D position of an object in a manipulation task). This low dimensional representation, frequently called the state of the system, has the crucial role of encoding essential information (for a given task) while discarding the many irrelevant aspects of the original data.
We reviewed State Representation Learning algorithms that are designed to find a way to compress high-dimensional observations data into a low and meaningful dimensional space for controlled systems. These models only require observations made by the system, the actions performed and optionally the reward of an associated task. We presented the various constraints that can be used in this objective: such representation should contain enough information to be able to reproduce the input observation; it should contain enough information about the dynamics of the environment to predict the future state given an action, or reversely to predict actions given two states; it should contain information related to a task reward; it should follow various constraints based on a priori knowledge directly on the representation space, e.g., using physics laws or common sense priors.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*State Representation Learning for Control: An Overview* and the introduction:Robotics control and artificial intelligence (AI) in a broad perspective heavily rely on the availability of compact and expressive representations of the sensor data. Designing such representations has long been performed manually by the designer, but deep learning now provides a general framework to learn such representations from data. This is particularly interesting for robotics where multiple sensors (such as cameras) can provide very high dimensional data, while the robot objective can often be expressed in a much lower dimensional space (such as the 3D position of an object in a manipulation task). This low dimensional representation, frequently called the state of the system, has the crucial role of encoding essential information (for a given task) while discarding the many irrelevant aspects of the original data."},{"role":"assistant","content":"Conclusion: We reviewed State Representation Learning algorithms that are designed to find a way to compress high-dimensional observations data into a low and meaningful dimensional space for controlled systems. These models only require observations made by the system, the actions performed and optionally the reward of an associated task. We presented the various constraints that can be used in this objective: such representation should contain enough information to be able to reproduce the input observation; it should contain enough information about the dynamics of the environment to predict the future state given an action, or reversely to predict actions given two states; it should contain information related to a task reward; it should follow various constraints based on a priori knowledge directly on the representation space, e.g., using physics laws or common sense priors."}]
Meta Learning for Natural Language Processing: A Survey
Recently, deep learning (DL) based natural language processing (NLP) has been one of the research mainstreams and yields significant performance improvements in many NLP problems. However, DL models are data-hungry. This downside limits such models’ application to different domains, languages, countries, or styles, because collecting in-genre data for model training is costly.
This paper investigates how meta-learning is used in NLP applications. We review the task construction settings (Section 3), the commonly used methods including learning to initialize, learning to compare and neural architecture search (Section 4), and highlight research directions that go beyond improving performance (Section 5). We hope this paper will encourage more researchers in the NLP community to work on meta-learning.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Meta Learning for Natural Language Processing: A Survey* and the introduction:Recently, deep learning (DL) based natural lan- guage processing (NLP) has been one of the re- search mainstreams and yields significant perfor- mance improvement in many NLP problems. How- ever, DL models are data-hungry. The downside limits such models’ application to different domains, languages, countries, or styles because collecting in-genre data for model training are costly."},{"role":"assistant","content":"Conclusion: This paper investigates how meta-learning is used in NLP applications. We review the task construction settings (Section 3 ), the commonly used methods including learning to initialize, learning to compare and neural architecture search (Section 4 ), and high- light research directions that go beyond improving performance (Section 5 ). We hope this paper will encourage more researchers in the NLP community to work on meta-learning."}]
An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process
Over the last decade, mainstream machine learning conferences have been strained by a deluge of conference paper submissions. At ICLR, for example, the number of submissions has grown by an order of magnitude within the last 5 years alone. Furthermore, the influx of researchers from disparate fields has led to a diverse range of perspectives and opinions that often conflict when it comes to reviewing and accepting papers. This has created an environment where the legitimacy and randomness of the review process is a common topic of discussion. Do conference reviews consistently identify high quality work? Or has review degenerated into censorship?
We find the level of reproducibility at ICLR (66% in 2020) to be higher than we expect when considering the much lower acceptance rate (26.5%), which seemingly contradicts the notion that reviews are “random.” Nonetheless, many authors find large swings in reviews as they resubmit papers to different conferences and find it difficult to identify a home venue where their ideas feel respected. We speculate that the perceived randomness of conference reviews is the result of several factors. First, differences in paper matching and bidding systems used by different conferences can sway the population of reviewers that are recommended to bid on an article, resulting in a major source of inter-conference randomness that is not represented in the above intra-conference study. Second, the influx of researchers from disparate backgrounds means that the value system of a paper’s reviewers is often mismatched with that of its authors.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A N O PEN R EVIEW OF O PEN R EVIEW : A C RITICAL A NALYSIS OF THE M ACHINE L EARNING C ONFERENCE R EVIEW P ROCESS* and the introduction:Over the last decade, mainstream machine learning conferences have been strained by a deluge of conference paper submissions. At ICLR, for example, the number of submissions has grown by an order of magnitude within the last 5 years alone. Furthermore, the influx of researchers from disparate fields has led to a diverse range of perspectives and opinions that often conflict when it comes to reviewing and accepting papers. This has created an environment where the legitimacy and randomness of the review process is a common topic of discussion. Do conference reviews consistently identify high quality work? Or has review degenerated into censorship?"},{"role":"assistant","content":"Conclusion: We find the level of reproducibility at ICLR (66% in 2020) to be higher than we expect when considering the much lower acceptance rate (26.5%), which seemingly contradicts the notion that reviews are “random.” Nonetheless, many authors find large swings in reviews as they resubmit papers to different conferences and find it difficult to identify a home venue where their ideas feel respected. We speculate that the perceived randomness of conference reviews is the result of several factors. First, differences in paper matching and bidding systems used by different conferences can sway the population of reviewers that are recommended to bid on an article, resulting in a major source of inter-conference randomness that is not represented in the above intra-conference study. Second, the influx of researchers from disparate backgrounds means that the value system of a paper’s reviewers is often mismatched with that of its authors."}]
Deep Learning for Iris Recognition: A Review
Biological patterns such as the face, voice, fingerprint, iris, and finger vein are widely used for personal identification and have replaced traditional methods like keys, passwords, and access cards as the most reliable and effective means of human identification. Existing studies [1, 2] have proven that, among these biological patterns, the iris pattern is the most accurate and secure form of personal identification due to its significant advantages: (a) Uniqueness: no two irises share the same physical characteristics, even if they come from the same person or from identical twins; (b) Stability: the iris is formed during childhood and generally maintains unchanged physical characteristics throughout life; (c) Informativeness: the iris has rich texture information such as spots, stripes, filaments and coronas [2]; (d) Safety: since the iris is located in a circular area under the surface of the eye, between the black pupil and the white sclera, it is rarely disturbed by external factors and is therefore difficult to forge; (e) Contactlessness: Iris Recognition (IR) is more hygienic than biometrics that require contact, such as fingerprint recognition [3]. Due to these advantages, IR has been widely used for identification [4].
This paper provides a comprehensive review of deep learning-based IR, filling the gap left by the absence of such a review in the field. A total of 120 papers have been collected to support this work. The purpose of this paper is to discuss deep learning-based IR tasks. The paper begins with background information on IR, including an analysis of the IR background in Section 1 and a review of related surveys together with an introduction to commonly used public datasets in Section 2. The review sections that follow contain the most important work of this paper. In Section 3, we discuss the recognition task of deep learning in IR; the identification pipeline can be divided into non-end-to-end and end-to-end processes. For the non-end-to-end process, we discuss each stage in turn, including pre-processing, feature extraction and matching. For the end-to-end process, we discuss typical and novel neural networks, respectively. U-Net and FCN show excellent performance on the segmentation task. In addition, the PAD task in IR is also studied. Moreover, the YOLO model achieves good results on the iris localization task, and GANs perform well in iris image enhancement. There are other tasks in which deep learning also plays an important role, and all these studies achieved excellent performance. The survey concludes with a summary of some typical challenges in IR and suggests potential directions that can inspire future research in the field.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Iris Recognition: A Review* and the introduction:Biological patterns, such as the face, voice, fingerprint, iris, and finger vein, have replaced traditional methods like keys, passwords, and access cards as the most reliable and effective human identification. Biological patterns, includ- ing face, voice, fingerprint, iris, and finger vein, are widely used for personal identification. Existing studies [ 1 , 2 ] have proven that among the aforemen- tioned biological patterns, the iris pattern is the most accurate and secure form for personal identification due to its significant advantages: (a) Unique: there are not any iris having the same physical characteristic as others, even if they come from the same person or identical twins; (b) Stability: the iris is formed during childhood, and it generally maintains unchangeable physi- cal characteristics throughout life; (c) Informative: the iris has rich texture information such as spots, stripes, filaments and coronas [ 2 ]. (d) Safety: Since the iris is located in a circular area under the surface of the eye between the black pupil and the white sclera, it is rarely disturbed by external factors. As a result, it is difficult to forge the iris pattern; (e) Contactless: Iris Recog- nition (IR) is more hygienic than biometrics that requires contact, such as fingerprint recognition [ 3 ]. Due to these advantages, IR has been widely used in identification [ 4 ]."},{"role":"assistant","content":"Conclusion: This paper provides a comprehensive review of deep learning-based IR, bridg- ing the lack of a comprehensive review related to deep learning in the field of IR. A total of 120 papers have been collected to support this work. The pur- pose of this paper is to discuss deep learning-based IR tasks. The paper begins with some information related to IR, including an analysis of IR background in Section. 1 and related reviews of an introduction to commonly used public datasets in Section. 2 . The survey reviews sections immediately follow with the most important work of this paper. In Section. 3 , we discuss the recogni- tion task of deep learning in IR, and the process of the identification task can be divided into non-end-to-end and end-to-end processes. In the non-end-to- end process, we discuss it according to the process, including pre-processing, feature extraction and matching. In the end-to-end process, we discuss it in terms of typical and novel neural networks, respectively. U-Net and FCN show their excellent performance on the segmentation task. In addition, the PAD task on IR is also studied. Moreover, the YOLO model achieves good results on the iris localization task. GAN performs well in iris image enhancement. There are some other tasks in which deep learning also plays an important role. All these studies achieved excellent performance. This survey concludes with a summary of some typical challenges of IR and suggests potential directions for IR. They can inspire future research in the field of IR."}]
A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Traditional machine learning methods assume that the training and test data are drawn independently and identically (i.i.d.) from the same distribution [1]. However, when the test distribution (target) differs from the training distribution (source), we face the problem of distribution shifts. Such a shift poses significant challenges for machine learning systems deployed in the wild, such as images captured by different cameras [2], road scenes of different cities [3], and imaging devices in different hospitals [4]. As a result, the research community has developed a variety of generalization or adaptation techniques to improve model robustness against distribution shifts. For instance, domain generalization (DG) [5] aims to learn a model using data from one or multiple source domains that can generalize well to any out-of-distribution target domain. On the other hand, domain adaptation (DA) [6] follows the transductive learning principle to leverage knowledge from a labeled source domain to an unlabeled target domain.
Learning to adapt a pre-trained model to unlabeled data under distribution shifts is an emerging and critical problem in the field of machine learning. This survey provides a comprehensive review of four related topics: source-free domain adaptation, test-time batch adaptation, online test-time adaptation, and test-time prior adaptation. These topics are unified as a broad learning paradigm of test-time adaptation. For each topic, we first introduce its history and definition, followed by a new taxonomy of advanced algorithms. Additionally, we provide a review of applications related to test-time adaptation, as well as an outlook of emerging research trends and open problems. We believe that this survey will assist both newcomers and experienced researchers in better understanding the current state of research in test-time adaptation under distribution shifts.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts* and the introduction:T RADITIONAL machine learning methods assume that the training and test data are drawn independently and identically (i.i.d.) from the same distribution [ 1 ]. However, when the test distribution (target) differs from the training distribution (source), we face the problem of distribution shifts . Such a shift poses significant challenges for machine learning systems deployed in the wild, such as images captured by different cameras [ 2 ], road scenes of different cities [ 3 ], and imaging devices in different hospitals [ 4 ]. As a result, the research community has developed a variety of generalization or adaptation techniques to improve model robustness against distribution shifts. For instance, domain generalization (DG) [ 5 ] aims to learn a model using data from one or multiple source domains that can generalize well to any out-of-distribution target domain. On the other hand, domain adaptation (DA) [ 6 ] follows the transductive learning principle to leverage knowledge from a labeled source domain to an unlabeled target domain."},{"role":"assistant","content":"Conclusion: Learning to adapt a pre-trained model to unlabeled data under distribution shifts is an emerging and critical problem in the field of machine learning. This survey provides a comprehensive review of four related topics: source-free domain adaptation, test-time batch adaptation, online test- time adaptation, and test-time prior adaptation. These top- ics are unified as a broad learning paradigm of test-time adaptation. For each topic, we first introduce its history and definition, followed by a new taxonomy of advanced algorithms. Additionally, we provide a review of applica- tions related to test-time adaptation, as well as an outlook of emerging research trends and open problems. We believe that this survey will assist both newcomers and experienced researchers in better understanding the current state of research in test-time adaptation under distribution shifts."}]
A Survey on Medical Document Summarization
The internet has become a global phenomenon, connecting people all over the world and allowing for the exchange of information on a scale that was previously unimaginable. The rise of the internet and the corresponding digitization of many aspects of daily life has had a profound impact on society, leading to information overload [ 18 ]. The sheer amount of information available today can be overwhelming. To combat this, individuals can use summarization techniques to distill the information down to its most essential points. The internet has also had a profound impact on medical science. With the proliferation of online health tools, it is now easier than ever before to access medical information and resources [ 88 ]. For example, individuals can easily search for medical information, research medical conditions and treatments, and find healthcare providers. Additionally, social media platforms have provided a platform for medical professionals to collaborate, share information, and discuss current medical topics. This has allowed medical professionals to quickly access the latest research, treatments, and developments in the field. Furthermore, online tools and platforms have enabled medical professionals to perform remote consultations with patients, providing more efficient and convenient healthcare services. There are a few reasons why summarization is important for medical documents. First, it allows for a quick overview of the document’s content. This can be useful when trying to determine if the document is relevant to a particular topic of interest. Second, summarization can help to identify key points or ideas within a document. This can be valuable when trying to understand the main arguments or findings of a study. Finally, summarization can help to improve the readability of a document by reducing the amount of text that needs to be read. This application of summarization systems has the potential to reduce the burden on medical workers, who are already overburdened [94].
The internet has drastically changed the way medical documents are created and accessed. In the past, they were often handwritten which made them hard to share and find. Now, they are typically created electronically which makes them much easier to both access and share. The internet has also allowed medical professionals to easily share documents with one another which has ultimately improved patient care and medical research. This paper provides a survey to introduce users and researchers to the techniques and current trends in the Medical Summarization task. We cover the formal definition of the Medical Summarization task, a detailed analysis of different medical tasks based on the type of medical documents and specific datasets and challenges associated with them, a detailed categorization of existing works based on input, output, and technique, and an in-depth look at the evaluation metrics utilized to measure the quality of the summaries. To finish, we suggest some potential future directions for further research. We are confident that this survey will encourage more work in medical document summarization.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Medical Document Summarization* and the introduction:The internet has become a global phenomenon, connecting people all over the world and allowing for the exchange of information on a scale that was previously unimaginable. The rise of the internet and the corresponding digitization of many aspects of daily life has had a profound impact on society leading to information overload [ 18 ]. The sheer amount of information available today can be overwhelming. To combat this, individuals can use summarization techniques to distill the information down to its most essential points. The internet also had a profound impact on medical science. With the proliferation of online health tools, it is now easier than ever before to access medical information and resources [ 88 ]. For example, individuals can easily search for medical information, research medical conditions and treatments, and find healthcare providers. Additionally, social media platforms have provided a platform for medical professionals to collabo- rate, share information, and discuss current medical topics. This has allowed medical professionals to quickly access the latest research, treatments, and developments in the field. Furthermore, online tools and platforms have enabled medical professionals to perform remote consultations with patients, providing more efficient and convenient healthcare services. There are a few reasons why summarization is important for medical documents. First, it allows for a quick overview of the document’s content. This can be useful when trying to determine if the document is relevant to a particular topic of interest. Second, summarization can help to identify key points or ideas within a document. This can be valuable when trying to understand the main arguments or findings of a study. Finally, summarization can help to improve the readability of a document by reducing the amount of text that needs to be read. This application of summarization systems has the potential to reduce the burdens from medical workers who already are overburdened [94]."},{"role":"assistant","content":"Conclusion: The internet has drastically changed the way medical documents are created and accessed. In the past, they were often handwritten which made them hard to share and find. Now, they are typically created electronically which makes them much easier to both access and share. The internet has also allowed medical professionals to easily share documents with one another which has ultimately improved patient care and medical research. This paper provides a survey to introduce users and researchers to the techniques and current trends in the Medical Summarization task. We cover the formal definition of the Medical Summarization task, a detailed analysis of different medical tasks based on the type of medical documents and specific datasets and challenges associated with them, a detailed categorization of existing works based on input, output, and technique, and an in-depth look at the evaluation metrics utilized to measure the quality of the summaries. To finish, we suggest some potential future directions for further research. We are confident that this survey will encourage more work in medical document summarization."}]
MAC Protocols for Wireless Mesh Networks with Multi-beam Antennas: A Survey ⋆
Due to the growing popularity of wireless local access, there is a high demand to improve network throughput and enhance energy efficiency in data transmission between terminal devices (e.g., mobile phones) and access points (or base stations). However, wireless local networks mainly focus on single-hop transmission. The explosive growth of practical wireless network deployments has sparked the idea of Wireless Mesh Networks (WMNs) [1], which can potentially improve the overall network capacity, enlarge the network coverage, and facilitate the network’s auto-configuration. WMNs typically organize the communication nodes in a mesh topology, similar to a wireless ad-hoc network. A WMN typically has three components: terminals, routers (or switches), and gateways [2]. Compared to traditional antennas, multi-beam antennas, alternatively called smart antennas, provide several advantages, e.g., a higher antenna gain, longer transmission range, better spatial reuse, and much lower interference [3]. Thus, introducing multi-beam antennas into wireless LANs can improve the overall performance of a network, especially for wireless mesh networks. For example, WMNs and their applications can be used in harsh working conditions or disaster relief environments to provide special services [4]. The advantages of multi-beam antennas in WMNs have attracted researchers from both academia and industry, resulting in rapid commercialization with substantial standardization efforts [5].
In this paper, we presented a comprehensive survey of MAC protocols for wireless mesh networks using multi-beam antennas. Theoretically, the capacity of a WLAN can be considerably boosted by the use of multi-beam smart antennas. However, directly applying IEEE 802.11 to a WLAN with multi-beam antennas inevitably encounters many challenges. The existing solutions to these challenges are based on DCF and hence are not suitable for multi-media applications. The design principles of MAC protocols need to exploit the benefits of multi-beam antennas and overcome the beamforming-related challenges.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*MAC Protocols for Wireless Mesh Networks with Multi-beam Antennas: A Survey ⋆* and the introduction:Due to a growing popularity of wireless local access, there exists a high demand to improve network throughput and enhance energy efficiency in data transmission between terminal devices (e.g., mobile phones) and access points (or base stations). However, wireless local networks mainly focus on the single-hop transmission. With the help of explosive implemen- tations of wireless network in practical, it has sparked the idea of Wireless Mesh Networks (WMN) [1], which can potentially improve the overall network capacity, enlarge the network coverage, and facilitate the network’s auto-configuration. WMN networks typically ogrnaize the communication nodes in a mesh topology, which is similar to a wireless ad-hoc net- work. In a WMN, it typically has three components: terminals, routers (or switches) and gateways [2]. Compared to traditional antennas, multi-beam antennas, alternatively called smart antennas, provide several advantages, e.g., a higher antenna gain, longer transmission range, better spatial reuse, and much lower interference [3]. Thus, introducing multi-beam antennas into wireless LAN can improve the overall performance of a network, especially for the wireless mesh networks. For example, WMN and its applications can be largely used in a harsh working condition or a disaster relief environment to provide special ser- vices [4]. The advantages of multi-beam antenna on WMN have attracted the researchers from both the academy and the industry, which result in rapid commercialization with lots of standardization efforts [5]."},{"role":"assistant","content":"Conclusion: In this paper, we presented a comprehensive survey of MAC protocols in the wireless mesh network using multi-beam antennas. Theoretically, the capacity of WLAN can be consider- ably boosted by the use of multi-beam smart antennas. However, if the designers directly apply IEEE 802.11 to a WLAN with multi-beam antennas, it will inevitably encounter many challenges. The existing solutions to these challenges are based on DCF and hence are not suitable for multi-media applications. The design principles of MAC protocols need to ex- ploit the benefits of multi-beam antennas and overcome the beamforming-related challenges."}]
Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review
Cancer is a disease marked by a disordered growth of abnormal cells that may lead to death if not treated. Around 1.9 million people in the US are expected to be diagnosed with cancer in 2023. Cancer is the second most common cause of death in the US, and it is expected to cause 1,670 deaths per day in the US in 2023 [1]. However, with advances in oncology research, it is estimated that nearly 42% of newly diagnosed cases can be potentially avoided. Being a complex disease, the development and growth of cancer involve multiple microscopic and macroscopic changes in the cell morphology, which are not yet fully understood. In recent years, there has been increasing interest in using machine learning techniques, such as deep neural networks (DNNs), to assist with cancer diagnosis and treatment.
Research efforts in integrating data across a few modalities have already shown encouraging results. There is no unified framework available for scaling across all possible modalities of cancer. The convergence of individual methodologies and data across varying scales may hold vital clues in creating a unified view of the disease that is more prognostic, predictive, and insightful than the individual view or modality. Efforts to beat cancer require synergistic analysis of heterogeneous data and instantiating scalable models. In this survey, we reviewed the multimodal learning task on oncology data. The future resides in developing a deployment-ready, scalable deep learning framework with inherent uncertainty quantification, interpretability, and generalizability to integrate oncology data across multiple scales, modalities, and resolutions to accelerate cancer diagnosis, prognosis, therapeutic response, and treatment planning.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review* and the introduction:Cancer is a disease marked by a disordered growth of abnormal cells that may lead to death if not treated. Around 1.9 million people in the US are expected to be diagnosed with cancer in 2023. Cancer is the second most common reason for death in the US, and it is expected to cause 1,670 deaths per day in the US in 2023 [1]. However, with advances in oncology research, it is estimated that nearly 42% of newly diagnosed cases can be potentially avoided. Being a complex disease, the development and growth of cancer involve multiple mi- croscopic and macroscopic changes in the cell morphology, which are not yet fully understood. In recent years, increasing interest has been in using machine learning techniques, such as deep neural networks (DNNs), to assist with cancer diagnosis and treatment."},{"role":"assistant","content":"Conclusion: Research efforts in integrating data across a few modalities have already shown encouraging results. There is no unified framework available for scaling across all possible modalities of cancer. The convergence of individual methodologies and data across varying scales may hold vital clues in creating a unified view of the disease that is more prognostic, predictive, and insightful than the in- dividual view or modality. Efforts to beat cancer require synergistic analysis of heterogeneous data and instantiating scalable models. In this survey, we re- viewed the multimodal learning task on oncology data. The future resides in developing a deployment-ready, scalable deep learning framework with inher- ent uncertainty quantification, interpret ability, and general iz ability to integrate oncology data across multiple scales, modalities, and resolutions to accelerate cancer diagnosis, prognosis, therapeutic response, and treatment planning."}]
Vision-Centric BEV Perception: A Survey
Accurate and comprehensive understanding of surrounding scenes, including dynamic objects and static streets, is essential for autonomous vehicles to make safe and effective driving decisions. 3D perception conducted in bird’s eye-view (BEV) has attracted immense attention in recent years for two main reasons. First, BEV representations of the world, especially traffic scenarios, contain rich semantic information, precise localization, and absolute scales, which can be directly deployed by many downstream real-world applications such as behavior prediction, motion planning, etc. Second, BEV provides a physics-interpretable way to fuse information from different views, modalities, time series, and agents. Because it represents the world in a coordinate system, multiple views of data captured from surrounding cameras can be fused into a whole scene in BEV without extra stitching operations in overlapping areas. Meanwhile, the temporal fusion of consecutive visual data is exact and natural, without the distortion that exists in the perspective view. Moreover, other widely used acquisition sensors, like LiDAR and Radar, capture data in 3D space, which can be easily transformed to BEV for sensor fusion with cameras. Even for Vehicle-to-Vehicle or Vehicle-Infrastructure communication technologies, BEV also plays an important role in fusing information from multiple sources.
This paper presents a comprehensive review of recent research on solving the view transformation between perspective view and bird’s eye view. We introduce and discuss related methods by clearly classifying them according to the core idea and downstream vision tasks. To facilitate further research and implementation, detailed comparison and analysis of performance and application scenarios are provided, and rich extensions of vision-centric BEV perception are also proposed.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Vision-Centric BEV Perception: A Survey* and the introduction:A C curate and comprehensive understanding of surround- ing scenes, including dynamic objects and static streets, is essential for autonomous vehicles to make safe and effective driving decisions. 3D perception conducted in bird’s eye-view (BEV) has attracted immense attention in recent years for two main reasons. First, BEV representations of the world, especially traffic scenarios, contains rich semantic information, precise localization, and absolute scales, which can be directly deployed by many downstream real-world applications such as behavior prediction, motion planning, etc. Second, BEV provides a physics-interpretable way to fuse information from different views, modalities, time series, and agents. Because it represents the world in a coordinate system, multiple views of data captured from surrounding cameras can be fused into a whole scene in BEV without extra stitching operations in overlapping areas. Meanwhile, the temporal fusion of consecutive visual data is exact and natural without any distortion existing in the perspective view. Moreover, other widely used acquisition sensors, like LiDAR and Radar, capture data in 3D space, which can been easily transformed to BEV, and conduct sensor fusion with cameras. Even for Vehicle-to-Vehicle or Vehicle-Infrastructure communication technologies, BEV also plays an important role in fusing various information from multiple sources."},{"role":"assistant","content":"Conclusion: This paper presents a comprehensive review of recent research on solving the view transformation between per- spective view and bird’s eye view. We introduce and discuss related methods by clearly classifying them according to the core idea and downstream vision tasks. To facilitate further research and implementation, detailed comparison and anal- ysis of performance and application scenarios are provided, and rich extensions of vision-centric BEV perception are also proposed."}]
Detection of Dangerous Events on Social Media: A Perspective Review
Social media users number in the billions and come from all around the world. Initially, social media was developed with the objective of helping people connect with family and friends online. It was used to share everyday things, events, interests, and news within closed circles of family and friends. These were personal events, e.g. birthdays, weddings, vacations, graduation ceremonies, and going out. After the usability of social media was discovered, it soon caught the attention of individuals and companies that started using social media to reach more customers. Soon after, the trend became a global phenomenon where people started connecting worldwide based on their common interests. The influence of social media on people’s lives and attitudes has been widely studied and established from many different perspectives [ 1 ], [ 2 ].
In this work, we laid the groundwork for the term “Dangerous Events” and explored different existing techniques and methods for detecting events on social media. “Dangerous events” carries a broad meaning, but we keep the definition to its essentials to delimit the term better. We believe much more can be included in dangerous events, as we explored in the discussion section. Categorizing dangerous events into sub-categories can help specify the event and its features. The subcategories consist of scenario-based, sentiment-based and action-based dangerous events. The usefulness of social media these days provides a significant advantage in detecting such events early. In some cases, significant events also originate on social media and then manifest in real life, such as mass protests, communal violence, and radicalization.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Detection of Dangerous Events on Social Media: A Perspective Review* and the introduction:The extent of social media users consists of billions of people from all around the world. Initially, social media was developed with the object in mind to help connect with family and friends online. It was used to share everyday things, events, interests, and news within closed circles of family and friends. These were personal events, e.g. birthdays, weddings, vacations, graduation ceremonies, and going out. After the usability of social media was discovered, it soon caught the attention of individuals and companies that started using social media to reach more customers. Soon after, the trend became a global phenomenon where people started connecting worldwide based on their com- mon interests. The influence of social media on people’s lives and attitudes has been widely studied and established in many different perspectives [ 1 ], [ 2 ]."},{"role":"assistant","content":"Conclusion: In this work, we laid the basis of the term “Dangerous Events” and explored different existing techniques and methods for detecting events on social media. Dangerous events contain a broad meaning, but we keep it essential to define the term better. We believe much more can be included in dangerous events, as we explored in the discussion section. Categorizing dangerous events into sub-categories can help specify the event and its features. The subcategories consist of scenario-based, sentiment-based and action-based dangerous events. The usefulness of social media these days provides a significant advantage in detecting such events initially. While in some cases, significant events also originate from social media and manifest in real life, such as; mass protests, communal violence and radicalization."}]
Temporal Link Prediction: A Unified Framework, Taxonomy, and Review
For various complex systems (e.g., social networks, biology networks, and communication networks), graphs provide a generic abstraction to describe system entities and their relationships. For instance, one can abstract each entity as a node (vertex) and represent the relationship between a pair of entities as an edge (link) between the corresponding node pair. Each edge can be associated with a weight to encode additional information about the interactions between system entities (e.g., trust rating between users [ 41 , 42 ] and traffic between telecommunication devices [ 8 , 77 ]).
In this survey, we comprehensively reviewed existing representative TLP methods. We first gave the formal definitions regarding (i) data models of dynamic graphs (i.e., evenly-spaced snapshot sequence and unevenly-spaced edge sequence descriptions), (ii) task settings of TLP (i.e., level-1 and -2), and (iii) learning paradigms of related research (i.e., DI, OTI, and OTOG). Based on these definitions, we further introduced a fine-grained hierarchical taxonomy that categorizes existing methods in terms of (i) data models, (ii) learning paradigms, and (iii) techniques, covering multiple aspects. From a generic perspective, a unified encoder-decoder framework was proposed to formulate all the methods reviewed. In particular, each method can be described by an encoder, a decoder, and a loss function, where different methods only differ in terms of these components. Based on the unified framework, we also refactored or implemented some representative TLP approaches and served the community with an open-source project OpenTLP, which would be constantly updated. Finally, we summarized some advanced topics in recent research and future research directions.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Temporal Link Prediction: A Unified Framework, Taxonomy, and Review* and the introduction:For various complex systems (e.g., social networks, biology networks, and communication networks), graphs provide a generic abstraction to describe system entities and their relationships. For instance, one can abstract each entity as a node (vertex) and represent the relationship between a pair of entities as an edge (link) between the corresponding node pair. Each edge can be associated with a weight to encode additional information about the interactions between system entities (e.g., trust rating between users [ 41 , 42 ] and traffic between telecommunication devices [ 8 , 77 ])."},{"role":"assistant","content":"Conclusion: In this survey, we comprehensively reviewed existing representative TLP methods. We first gave the formal definitions regarding (i) data models of dynamic graphs (i.e., evenly-spaced snapshot sequence and unevenly-spaced edge sequence descriptions), (ii) task settings of TLP (i.e., level-1 and -2), and (iii) learning paradigms of related research (i.e., DI, OTI, and OTOG). Based on these definitions, we further introduced a fine-grained hierarchical taxonomy that categorizes existing methods in terms of (i) data models, (ii) learning paradigms, and (iii) techniques, covering multiple aspects. From a generic framework, a unified encode-decoder framework was proposed to formulate all the methods reviewed. In particular, each method can be described by an encoder, a decoder, and a loss function, where different methods only differ in terms of these components. Based on the unified framework, we also refactored or implemented some representative TLP approaches and served the community with an open-source project OpenTLP , which would be constantly updated. Finally, we summarized some advanced topics in recent research and future research directions."}]
Deep Learning for Omnidirectional Vision: A Survey and New Perspectives
With the rapid development of 3D technology and the pursuit of realistic visual experience, research interest in computer vision has gradually shifted from traditional 2D planar image data to omnidirectional image (ODI) data, also known as the 360° image, panoramic image, or spherical image data. ODI data captured by 360° cameras yields a 360° × 180° field-of-view (FoV), which is much wider than that of pinhole cameras; therefore, it can capture the entire surrounding environment by reflecting richer spatial information than conventional planar images. Due to the immersive experience and complete view, ODI data has been widely applied to numerous applications, e.g., augmented reality (AR) / virtual reality (VR), autonomous driving, and robot navigation. In general, raw ODI data is represented as, e.g., the equirectangular projection (ERP) or cubemap projection (CP) to be consistent with the imaging pipelines [ 1 ], [ 2 ]. As a novel data domain, ODI data has both domain-unique advantages (wide FoV of spherical imaging, rich geometric information, multiple projection types) and challenges (severe distortion in the ERP type, content discontinuities in the CP format). This renders the research on omnidirectional vision valuable yet challenging.
In this survey, we comprehensively reviewed and analyzed the recent progress of DL methods for omnidirectional vision. We first introduced the principle of omnidirectional imaging, convolution methods and datasets. We then provided a hierarchical and structural taxonomy of the DL methods. For each task in the taxonomy, we summarized the current research status and pointed out the opportunities and challenges. We further provided a review of the novel learning strategies and applications. After constructing connections among current approaches, we discussed the pivotal problems to be solved and indicated promising future research directions. We hope this work can provide some insights for researchers and promote progress in the community.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Omnidirectional Vision: A Survey and New Perspectives* and the introduction:W ITH the rapid development of 3D technology and the pursuit of realistic visual experience, research interest in computer vision has gradually shifted from traditional 2D planar image data to omnidirectional image (ODI) data, also known as the 360 ˝ image, panoramic image, or spherical image data. ODI data captured by the 360 ˝ cameras yields a 360 ˝ ˆ 180 ˝ field-of-view (FoV), which is much wider than the pinhole cameras; therefore, it can capture the entire surrounding environment by reflecting richer spatial information than the conventional planar images. Due to the immersive experience and complete view, ODI data has been widely applied to numerous applications, e . g ., augmented reality(AR) { virtual reality (VR), autonomous driving, and robot navigation. In general, raw ODI data is represented as, e . g ., the equirectangular projection (ERP) or cubemap projection (CP) to be consistent with the imaging pipelines [ 1 ], [ 2 ]. As a novel data domain, ODI data has both domain-unique advantages (wide FoV of spherical imaging, rich geometric information, multiple projection types) and challenges (severe distortion in the ERP type, content discontinuities in the CP format). This renders the research on omnidirectional vision valuable yet challenging."},{"role":"assistant","content":"Conclusion: In this survey, we comprehensively reviewed and analyzed the recent progress of DL methods for omnidirectional vision. We first introduced the principle of omnidirectional imaging, con- volution methods and datasets. We then provided a hierarchical and structural taxonomy of the DL methods. For each task in the taxonomy, we summarized the current research status and pointed out the opportunities and challenges. We further provided a review of the novel learning strategies and applications. After constructing connections among current approaches, we discussed the pivotal problems to be solved and indicated promising future research directions. We hope this work can provide some insights for researchers and promote progress in the community."}]
How to Evaluate Your Dialogue Models: A Review of Approaches
Automatic evaluation is a non-trivial and challenging task in many sub-fields of natural language processing, e.g. text summarization [33], machine translation [44,34] and dialogue systems [37]. In particular, the evaluation of dialogue systems often correlates poorly with human judgement. This mismatch is a key bottleneck in migrating dialogue systems developed off-line for application in the real world [37].
In this paper, we summarize the evaluation methods of dialogue systems and attempt a method-based taxonomy of the existing evaluation metrics. Even though much work has been done to construct effective evaluation methods, capturing all aspects of dialogue responses, from naturalness and coherence to long-term engagement and flow, remains a challenging and understudied task. Finally, we believe that the redefinition of the user simulator is the most promising direction for building an ideal evaluator.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*How to Evaluate Your Dialogue Models: A Review of Approaches* and the introduction:Automatic evaluation is a non-trivial and challenging task in many sub-fields of natural language processing, e.g. text summarization [33], machine trans- lation [44,34] and dialogue system [37]. Especially, the evaluation of dialogue system often poorly correlates with human judgement. This mismatch is a key bottleneck in migrating dialogue systems developed off-line for application in the real world [37]."},{"role":"assistant","content":"Conclusion: In this paper, we summarize the evaluation methods of dialogue systems and a method-based taxonomy of the existing evaluation metric is attempted here. Even though much work has done to construct an effective evaluation method, it is still a challenging and understudied work to capture all aspects of dialogue response from naturalness and coherence to long-term engagement and flow. Finally, we believe that the redefinition of user simulator is the most promising direction to build an ideal evaluator."}]
Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview
Probabilistic models of application domains are central to pattern recognition, machine learning, and scientific modeling in various fields. Consequently, unifying frameworks are likely to be fruitful for one or more of these fields. There are also more technical motivations for pursuing the unification of diverse model types. In multiscale modeling, models of the same system at different scales can have fundamentally different characteristics (e.g. deterministic vs. stochastic) and yet must be placed in a single modeling framework. In machine learning, automated search over a wide variety of model types may be of great advantage. In this paper we propose Stochastic Parameterized Grammars (SPG’s) and their generalization to Dynamical Grammars (DG’s) as such a unifying framework. To this end we define mathematically both the syntax and the semantics of this formal modeling language.
We have established a syntax and semantics for a probabilistic modeling language based on independent processes leading to events linked by a shared set of objects. The semantics is based on a polynomial ring of time-evolution operators. The syntax is in the form of a set of rewrite rules. Stochastic Parameterized Grammars expressed in this language can compactly encode disparate models: generative cluster data models, biochemical networks, logic programs, graph grammars, string rewrite grammars, and stochastic differential equations, among others. The time-ordered product expansion connects this framework to powerful methods from quantum field theory and operator algebra.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview* and the introduction:Probabilistic models of application domains are central to pattern recognition, machine learning, and scientific mod- eling in various fields. Consequently, unifying frameworks are likely to be fruitful for one or more of these fields. There are also more technical motivations for pursuing the unification of diverse model types. In multiscale modeling, models of the same system at different scales can have fundamentally different characteristics (e.g. deterministic vs. stochastic) and yet must be placed in a single modeling framework. In machine learning, automated search over a wide variety of model types may be of great advantage. In this paper we propose Stochastic Parameterized Grammars (SPG’s) and their generalization to Dynamical Grammars (DG’s) as such a unifying framework. To this end we define mathematically both the syntax and the semantics of this formal modeling language."},{"role":"assistant","content":"Conclusion: We have established a syntax and semantics for a probabilistic modeling language based on independent processes leading to events linked by a shared set of objects. The semantics is based on a polynomial ring of time-evolution operators. The syntax is in the form of a set of rewrite rules. Stochastic Parameterized Grammars expressed in this language can compactly encode disparate models: generative cluster data models, biochemical networks, logic programs, graph grammars, string rewrite grammars, and stochastic differential equations among other others. The time-ordered product expansion connects this framework to powerful methods from quantum field theory and operator algebra."}]
A Survey for Real-Time Network Performance Measurement via Machine Learning
# 1.1 Motivation
Real-time networking is required for next-generation communication. To achieve the stringent goals of real-time networks, network operators rely on efficient, reliable, flexible, and global network performance measurement, which assists real-time networks in providing these services promptly. Such complex measurement becomes a resource-intensive task as the network size increases. ML can help to form the basis for network performance measurement. Moreover, by applying ML to network performance measurement, efficiency and resiliency can potentially be improved.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A S URVEY FOR R EAL -T IME N ETWORK P ERFORMANCE M EASUREMENT VIA M ACHINE L EARNING* and the introduction:# 1.1 Motivation"},{"role":"assistant","content":"Conclusion: The real-time network is required for next-generation communication. To achieve the stringent goals of real-time network, network operators rely on an efficient, reliable, flexible, and globally network performance measurement, which helps to assist real-time networks in providing these services promptly. The complex measurement becomes a resource intensive mission while the network size increasing. ML can help to form the basis for network performance measurement. Moreover, by applying ML in network performance measurement, efficiency and resiliency can potentially be improved."}]
All One Needs to Know about Priors for Deep Image Restoration and Enhancement: A Survey
Image quality often deteriorates during capture, storage, transmission, and rendering. Typical degradations include blur, resolution degradation, noise, and other artifacts. Image restoration and enhancement is a process that attempts to improve the image quality by removing these degradations while preserving the indispensable image characteristics, as shown in Fig. 1.
In this paper, we offered an insightful and methodical analysis of the recent advances of priors in deep image restoration and enhancement. We constructed a hierarchical and structural taxonomy of priors commonly used in DL-based methods. Meanwhile, we provided an insightful discussion on each prior regarding its principle, potential, and applications. Moreover, we summarized the crucial problems by highlighting the potential future directions to spark more research in the community.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*All One Needs to Know about Priors for Deep Image Restoration and Enhancement: A Survey* and the introduction:I MAGE quality often deteriorates during capture, storage, trans- mission, and rendering. Typical degradations include blur, res- olution degradation, noise, and other artifacts. Image restoration and enhancement is a process that attempts to improve the image quality by removing these degradations while preserving the indispensable image characteristics, as shown in Fig. 1 ."},{"role":"assistant","content":"Conclusion: In this paper, we offered an insightful and methodical analysis of the recent advances of prior in deep image restoration and en- hancement. We conducted a hierarchical and structural taxonomy of prior commonly used in the DL-based methods. Meanwhile, we provided an insightful discussion on each prior regarding its principle, potential, and applications. Moreover, we summarized the crucial problems by highlighting the potential future directions to spark more research in the community."}]
A Systematic Literature Review on the security of Unmanned Aerial Vehicle System
UAV is an acronym for Unmanned Aerial Vehicle, an air vehicle without any onboard pilot. The aircraft is controlled remotely by a controller on the ground, or the UAV can fly autonomously following a complex pre-programmed model [ 1 ]. It is also called a Remotely Piloted Vehicle (RPV) when the pilotless aircraft is operated from the ground via electronically controlled equipment. UAVs are used for observation and tactical planning. UAVs are capable of maintaining their flight level and are powered by jet propellers or reciprocating engines. UAVs can be categorized into government, commercial, and hobby uses [ 2 ]. There is a plethora of functional uses for UAVs, such as military training (providing both sky and ground coverage of arms against a specific target and helping to simulate enemy air vehicles), reconnaissance, combat, logistics (delivering cargo), advancing technologies through research and development in UAVs, and commercial/civil purposes such as gathering agricultural information and aerial photography. Unmanned aerial vehicles are, to a considerable extent, of great use in easing the work of humankind at an economical cost [ 3 ].
This study has uncovered previous studies that examine how Unmanned Aerial Vehicle solutions might exacerbate cyber security issues. This study examined a UAV system’s overall security hazard. The thorough threat analysis of the system is intended to assist both the system’s designers and users in recognizing potential vulnerabilities and putting appropriate mitigation and recovery mechanisms in place. It is still difficult to pinpoint which risks could have the greatest impact on UAV systems because the majority of the information about the current safeguards is classified. Working on a few of these dangers and utilizing mission data to better precisely simulate them will be part of future development.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Systematic Literature Review on the security of Unmanned Aerial Vehicle System* and the introduction:The Unmanned Aerial Vehicle is an acronym for UAV, an air vehicle without any onboard pilot. The aircraft is controlled remotely by a controller which is on the ground or the UAV can fly autonomously with the complex pre-programmed model [ 1 ]. It is also called a Remotely Piloted Vehicle (RPV) when pilotless aircraft are operated from the ground via electronically controlled equipment. UAVs are used for ob- servatory and tactic planning. UAVs are capable of main- taining the flight level which is powered by jet propellers or reciprocating engines. UAVs can be categorized into use for government, commercial and hobby purposes. [ 2 ] There are a plethora of functional uses for UAVs like military training by providing both sky and ground coverage of arms with a specific target and also helping in simulating enemies’ air vehicles, reconnaissance, combating, for logis- tics: delivering cargo, improving technologies with research and development in UAVs and commercial/civil purpose of taking agriculture information and aerial photography. Unmanned Aerial vehicles have a great use to a certain level easing the work of humankind with economical financial cost [ 3 ]."},{"role":"assistant","content":"Conclusion: This study has uncovered previous studies that examine how Unmanned Aerial Vehicle solutions might exacerbate cyber security issues. This study examined a UAV system’s overall security hazard. The thorough threat analysis of the system is intended to assist both the system’s designers and users in recognizing potential vulnerabilities and putting appropriate mitigation and recovery mechanisms in place. It is still difficult to pinpoint which risks could have the greatest impact on UAV systems because the majority of the information about the current safeguards is classified. Working on a few of these dangers and utilizing mission data to better precisely simulate them will be part of future development."}]
Orchestrating Development Lifecycle of Machine Learning Based IoT Applications: A Survey
Rapid development of hardware, software, and communication technologies boosts the speed of connecting the physical world to the Internet via the Internet of Things (IoT). A report 1 shows that about 75.44 billion IoT devices will be connected to the Internet by 2025. These massive numbers of devices generate a large amount of data with various modalities. Processing and analyzing such big data is essential for developing smart IoT applications. Machine Learning (ML) plays a vital role in data intelligence, which aims to understand and explore the real world. ML + IoT applications are thus experiencing explosive growth. However, there are unfilled gaps between current solutions and the demands of orchestrating the development lifecycle of ML-based IoT applications. Existing orchestration frameworks, for example Ubuntu Juju, Puppet, and Chef, are flexible in providing solutions for deploying and running applications over public or private clouds. These frameworks, however, neglect the heterogeneity of the IoT environment, which encompasses various hardware, communication protocols, and operating systems. More importantly, none of them is able to completely orchestrate the holistic development lifecycle of ML-based IoT applications. The development lifecycle must cover the following factors: (1) how the target application is specified and developed; (2) where the target application is deployed; and (3) what kind of information about the target application needs to be audited. Application specification defines the requirements, including the ML tasks, performance, accuracy, and execution workflow. Based on the specification and the available computing resources, the ML models are developed to meet the specified requirements while optimizing the training processes in terms of the cost of time and computing resources. Next, model deployment considers the difficulty posed by the heterogeneity of the IoT environment for running a set of composed ML models. Finally, ML-based IoT applications closely connect with people’s lives, and some applications such as autopilot require high reliability. Therefore, accountability aims to collect the essential monitoring information to improve the performance of the application in the next iteration of the lifecycle.
Growing numbers of internet-connected things (IoT) produce vast amounts of data, build applications and provide various services in domains such as smart cities, energy, mobility, and smart transportation. ML is becoming a preliminary technique for analyzing IoT data. It produces high-level abstraction and insight that is fed to the IoT systems for fine-tuning and improvement of the services. In this survey, we reviewed the characteristics of the IoT development lifecycle and the role of ML for individual steps. Specifically, we divided the development lifecycle into different modules and presented a novel taxonomy to characterize and analyze various techniques used to build an ML-based IoT application. In summary, this survey seeks to provide systematic and insightful information for researchers. It assists the development of future orchestration solutions by providing a holistic view of the current status of ML-based IoT application development, deriving key open research issues that were identified based on our critical review.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Orchestrating Development Lifecycle of Machine Learning Based IoT Applications: A Survey* and the introduction:Rapid development of hardware, software and communication technologies boosts the speed of connecting the physical world to the Internet via Internet of Things (IoT). A report 1 shows that about 75.44 billion IoT devices will be connected to the Internet by 2025. These massive devices generate a large amount of data with various modalities. Processing and analyzing such big data is essential for developing smart IoT applications. Machine Learning (ML) plays a vital role in data intelligence which aims to understand and explore the real world. ML + IoT type applications thus are experiencing explosive growth. However, there are unfilled gaps between current solutions and demands of orchestrating development lifecycle of ML-based IoT applications. Exiting orchestration frameworks for example Ubuntu Juju , Puppet and Chef are flexible in providing solutions for deploying and running applications over public or private clouds. These frameworks, however, neglect heterogeneity of IoT environment that encompasses various hardwares, communication protocols and operation systems. More importantly, none of them are able to completely orchestrate holistic development lifecycle of ML-based IoT applications. The development lifecycle must cover the following factors: 1) how the target application is specified and developed? 2) where the target application is deployed? (3) what kind of information of the target application needs to be audited. Application specification defines the requirements including the ML tasks, performance, accuracy and execution workflow. Based on the specification and the available computing resources, the ML models are developed to meet the specified requirements while optimizing the training processes in terms of the cost of time and computing resources. Next, the model deployment considers the difficulty of the heterogeneity of the IoT environment for running a set of composed ML models. Finally, ML-based IoT applications closely connect with people’s lives and some applications such as autopilot require high reliability. Therefore, the accountability aims to collect the essential monitoring information to improve the performance of the application in next iteration of the lifecycle."},{"role":"assistant","content":"Conclusion: Growing numbers of internet-connected things (IoT) produce vast amounts of data, build applications and provide various services in domains such as smart cities, energy, mobility, and smart transportation. ML is becoming a preliminary technique for analyzing IoT data. It produces high-level abstraction and insight that is fed to the IoT systems for fine- tuning and improvement of the services. In this survey, we reviewed the characteristics of IoT development lifecycle and the role of ML for individual steps. In specific, we divided the development lifecycles into different modules and presented a novel taxonomy to characterize and analyze various techniques used to build an ML-based IoT application. In summary, this survey seeks to provide systematic and insightful information for researchers. It assists the development of future orchestration solutions by providing a holistic view on the current status of ML-based IoT application development, deriving key open research issues that were identified based on our critical review."}]
Conversational Agents in Therapeutic Interventions for Neurodevelopmental Disorders: A Survey
Neurodevelopmental Disorders (NDD) are a group of conditions that originate during childhood and are characterized by severe deficits in the cognitive and social areas [ 5 ]. NDD is a chronic state, but early and focused interventions are thought to mitigate its effects [ 22 ]. Traditional therapy involves different approaches for assessing NDD symptoms and helping persons with NDD develop solid social skills [ 7 , 50 ], learn receptive and expressive language [ 29 ], improve adaptive behavior, and promote autonomy in self-care and everyday life tasks.
The adoption of conversational agents for people with NDD is emerging as a promising way to assess and stimulate their skills in therapeutic interventions. This survey of a number of relevant publications in the last decade highlighted i) the large number and variety of therapeutic goals addressed by the conversational agents used in the therapy of people with NDD – which is not surprising, considering the broad variety of disorders and impairments associated with NDD, and ii) the wide gamut of different methodological approaches adopted in their design and evaluation.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Conversational Agents in Therapeutic Interventions for Neuro developmental Disorders: A Survey* and the introduction:Neuro developmental Disorders (NDD) are a group of conditions that originate during childhood and are characterized by severe deficits in the cognitive and social areas [ 5 ]. NDD is a chronic state, but early and focused interventions are thought to mitigate its effects [ 22 ]. Traditional therapy involves different approaches for assessing NDD symptoms and helping persons with NDD develop solid social skills [ 7 , 50 ], learn receptive and expressive language [ 29 ], improve adaptive behavior, and promote autonomy in self-care and everyday life tasks."},{"role":"assistant","content":"Conclusion: The adoption of conversational agents for people with NDD is emerging as promising to assess and stimulate their skills in therapeutic interventions. This survey of a number of relevant publications in the last decade highlighted i) the large amount and variety of therapeutic goals addressed by the conversational agents used in the therapy of people with NDD – which is not surprising, considering the broad variety of disorders and impairments associated to NDD, and ii) the wide gamut of different methodological approaches adopted in their design and evaluation."}]
Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools
Smartphones and laptops give their users access to an astonishing range of tasks anywhere, anytime. While this provides innumerable benefits, a growing amount of public discussion and research attention focuses on a perhaps unexpected downside [ 7 , 19 , 62 , 102 , 108 , 121 , 187 , 201 ]: Having immense amounts of functionality available instantly and permanently often makes it difficult for users to focus on their current task and avoid being overly distracted by notifications or habitual check-ins [ 48 , 117 , 181 , 185 ]. This challenge is compounded by the business models of many tech companies, which incentivise design that nudges people into using services frequently and extensively in order to optimise advertising revenue (cf. the ‘attention economy’ [ 50 , 56 , 75 , 201 ]).
The challenge of designing powerful, always-connected digital devices that support self-control over their use is important to address. This paper contributes to such efforts on two levels: (i) by providing the first comprehensive functionality analysis of current apps and browser extensions for digital self-control on the Google Play, Chrome Web, and Apple App stores, and (ii) by applying a well-established model of self-regulation to evaluate their design features and provide a mechanistic understanding of the problem they address.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools* and the introduction:Smartphones and laptops give their users access to an aston- ishing range of tasks anywhere, anytime. While this provides innumerable benefits, a growing amount of public discus- sion and research attention focuses on a perhaps unexpected downside [ 7 , 19 , 62 , 102 , 108 , 121 , 187 , 201 ]: Having immense amounts of functionality available instantly and permanently often makes it difficult for users to focus on their current task and avoid being overly distracted by notifications or habitual check-ins [ 48 , 117 , 181 , 185 ]. This challenge is com- pounded by the business models of many tech companies, which incentivise design that nudges people into using ser- vices frequently and extensively in order to optimise adver- tising revenue (cf. the ‘attention economy’ [ 50 , 56 , 75 , 201 ])."},{"role":"assistant","content":"Conclusion: The challenge of designing powerful, always-connected dig- ital devices that support self-control over their use, is impor- tant to address. This paper contributes to such efforts on two levels: (i) by providing the first comprehensive functionality analysis of current apps and browser extensions for digital self-control on the Google Play, Chrome Web, and Apple App stores, and (ii) by applying a well-established model of self-regulation to evaluate their design features and provide a mechanistic understanding of the problem they address."}]
Multi-Agent Model Predictive Control: A Survey
Already back in 1978, Sandell et al. [22] surveyed a wide range of alternative methods for decentralized control. They find that a good combination of engineering judgment and analysis can be used to define in a reasonable way an ad-hoc control structure for a dynamic system. They conclude that methodologies are needed that present a designer with several good control structure candidates for further consideration.
In this report we have given an overview of recent literature on multi-agent MPC. We have identified common aspects in each of the reviewed papers. This has led to the identification of certain groups and attributes at a rather non-mathematical level. This allows us to identify directions for further research. Although a significant amount of progress has been made since the 1978 survey paper by Sandell et al. [22], many issues remain to be investigated. Some of these are:
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Multi-Agent Model Predictive Control: A Survey* and the introduction:Already back in 1978, Sandell et al. [22] surveyed a wide range of alternative methods for decentralized control. They find that a good combination of engi- neering judgment and analysis can be used to define in a reasonable way an ad- hoc control structure for a dynamic system. They conclude that methodologies are needed that present a designer with several good control structure candidates for further consideration."},{"role":"assistant","content":"Conclusion: In this report we have given an overview of recent literature on multi-agent MPC. We have identified common aspects in each of the reviewed papers. This has led to the identification of certain groups and attributes at a rather non-mathematical level. This allows us to identify directions for further research. Although since the survey paper of 1978 by Sandell et al. [22] a significant amount of progress has been made, many issues remain to be investigated. Some of these are:"}]
Deep Learning for Text Attribute Transfer: A Survey
Driven by fast-evolving pretraining techniques over large-scale corpora, neural language generation (NLG) has made tremendous achievements and it can now generate various kinds of amazing human-like text on demand, such as stories, songs, press releases, and technical manuals, to name a few ( Keskar et al. , 2019 ; Brown et al. , 2020 ). However, in order to make these NLG models applicable to complicated real-world applications, we need to further grant them the capability to control certain attributes that people may expect the texts to possess, such as style, sentiment, tense, emotion, political position, etc. Such controllable NLG models have wide applications in text rewriting tools ( Syed et al. , 2020 ), dialogue systems ( Niu and Bansal , 2018 ; Huang et al. , 2018 ), and other natural language interfaces. The development of these models has given rise to the text attribute transfer task, which aims to convert a source text with an attribute (e.g., positive sentiment) to a new text with a different attribute (e.g., negative sentiment). The generated text should meet the following requirements: (i) maintaining the attribute-independent content of the source text, (ii) conforming to the target attribute, and (iii) maintaining linguistic fluency.
This article presents the first comprehensive survey on text attribute transfer, which aims to change one or more attributes of a text while keeping the other information the same. We review recent research efforts and develop classification schemes to organize the existing literature. We start by providing the task formulation and introducing existing datasets used for this task, and then proceed to summarizing the evaluation metrics, including both human and automatic evaluation. Additionally, we introduce the methods developed for this task by categorizing them into four main types: disentanglement based, back-translation based, prototype based, and reinforcement learning based. We end by discussing several existing issues for this task. We attempt to provide a good reference for researchers to gain insight into the challenges, methods, and issues in this research topic and shed light on future directions.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Text Attribute Transfer: A Survey* and the introduction:Driven by fast evolving pretraining techniques over large-scale corpora, neural language gener- ation (NLG) has made tremendous achievements and it can now generate various kinds of amazing human-like text on demand, such as stories, songs, press releases, and technical manuals, to name a few ( Keskar et al. , 2019 ; Brown et al. , 2020 ). How- ever, in order to make these NLG models applicable to complicated real-world applications, we need to further grant them the capability to control certain attributes that people may expect the texts to pos- sess, such as style, sentiment, tense, emotion, polit- ical position, etc. Such controllable NLG models have wide applications in text rewriting tools ( Syed et al. , 2020 ), dialogues systems ( Niu and Bansal , 2018 ; Huang et al. , 2018 ), and other natural lan- guage interfaces. The development of these models has given rise to the text attribute transfer task, which aims to convert a source text with an at- tribute (e.g., positive sentiment) to a new text with a different attribute (e.g., negative sentiment). The generated text should meet the following require- ments: (i) maintaining the attribute-independent content as the source text, (ii) conforming to the target attribute, and (iii) still maintaining the lin- guistic fluency."},{"role":"assistant","content":"Conclusion: This article presents the first comprehensive sur- vey on text attribute transfer which aims to change one or more attributes of a text with the other in- formation remained the same. We review recent research efforts and develop classification schemes to organize the existing literature. We start from providing the task formulation, introducing exist- ing datasets used for this task, and then proceed to summarizing the evaluation metrics including both human and automatic evaluation. Additionally we introduce the methods developed for this task by categorizing them into four main types: disentan- glement based, back-translation based, prototype based, and reinforcement learning based. We end up with discussing several existing issues for this task. We attempt to provide a good reference for researchers to gain insight of the challenges, meth- ods, and issues in this research topic and shed light on future directions."}]
A Survey of Data Optimization for Problems in Computer Vision Datasets
Recently, deep learning has achieved impressive progress in computer vision [ 74 , 76 ]. The success of deep learning mainly owes to three points, namely advanced deep network architectures ( e.g., ResNet) [ 46 ], powerful computing devices ( e.g., GPUs), and large datasets with labels ( e.g., ImageNet) [ 26 ]. Among them, deep network architectures and computing devices are well developed, but it is still very difficult to obtain high-quality training datasets. Because the attention of researchers has mainly been focused on the development and optimization of deep models and computational devices, data, once ready, tends to become a fixed asset and receive less attention (as shown in Fig. 1 (a)). With the development of model architectures, the incremental gains from improving models are diminishing in many tasks [ 58 ], while relatively small improvements in data can make AI models much more reliable [ 74 ]. Therefore, more attention should be paid to data development. Furthermore, there is a prevalent assumption that all data points are equally relevant to model parameter updating. In other words, all the training data are presented equally and randomly to the model. However, numerous works have challenged this assumption and proven that not all samples are created equal [ 42 , 54 ], and networks should be aware of the varying complexity of the data and spend most of the computation on critical examples. With all these concerns, there is a strong need for the artificial intelligence (AI) community to move from model-level research to data-level research for further improvements in model learning. Ideally, the collected dataset is correct, without any problems, and can be directly used for model learning. However, since there is no uniform standard for the data collection and labeling process, and the labeling is usually left to
Data optimization is a key technique for improving dataset quality and has received increasing attention in machine learning and deep learning. In this paper, for the first time, we summarize and analyze inherent or extrinsic problems faced by large scale datasets. In response to these problems, we review the field of data optimization, and present and compare different data optimization algorithms in detail. Based on our analysis, we provide several potential future research directions. We hope that this survey will provide useful insights for researchers and motivate them to make more progress in the future.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey of Data Optimization for Problems in Computer Vision Datasets* and the introduction:Recently, deep learning has achieved impressive progress in computer vision [ 74 , 76 ]. The success of deep learning mainly owes to three points, namely advanced deep network architectures ( e.g., ResNet) [ 46 ], powerful computing devices ( e.g., GPUs), and large datasets with labels ( e.g., ImageNet) [ 26 ]. Among them, deep network architectures and computing devices are well-developed, but it is still very difficult to obtain high-quality training datasets. As the attention of researchers was mainly focused on the development and optimization of deep models and computational devices, instead, once the data was ready, it became a fixed asset and received less attention (as shown in Fig. 1 (a)). With the development of model architecture, the incremental gains from improving models are diminishing in many tasks [ 58 ], while relatively small improvements in data can make AI models much more reliable [ 74 ]. Therefore, more attention should be paid on data development. Furthermore, there is a prevalent assumption that all data points are equivalently relevant to model parameter updating. In other words, all the training data are presented equally and randomly to the model. However, numerous works have challenged this assumption and proven that not all samples are created equal [ 42 , 54 ], and networks should be aware of the various complexities of the data and spend most of the computation on critical examples. With all these concerns, there is a strong need for the community of artificial intelligence (AI) to move from model-level research to data-level research for further improvements in model learning. Ideally, the collected dataset is correct, without any problem, and can be directly used for model learning. However, since there is no uniform standard for the data collection and labeling process, and the labeling is usually left to"},{"role":"assistant","content":"Conclusion: Data optimization is a key technique for improving dataset quality and has received increasing attention in machine learning and deep learning. In this paper, for the first time, we summarize and analyze inherent or extrinsic problems faced by large scale datasets. In response to these problems, we review the field of data optimization, and present and compare different data optimization algorithms in detail. Based on our analysis, we provide several potential future research directions. We hope that this survey will provide useful insights for researchers and motivate them to make more progress in the future."}]
MATH-AWARE SEARCH ENGINES: PHYSICS APPLICATIONS AND OVERVIEW
What do the AdS/CFT Correspondence, dumb holes, and effective theories have in common?
New research methods are enabled by equation-based search engines. From the simple benefit of looking up a concept by its equation - whether half-remembered or fully - to the advanced application of helping researchers discover new interdisciplinary connections, the possible applications elucidated so far have likely barely scratched the surface.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*MATH-AWARE SEARCH ENGINES: PHYSICS APPLICATIONS AND OVERVIEW* and the introduction:What do the AdS/CFT Correspondence, dumb holes, and effective theories have in common?"},{"role":"assistant","content":"Conclusion: New research methods are enabled by equation-based search engines. From the simple benefit of looking up a concept by its equation - whether half-remembered or fully - to the advanced application of helping researchers discover new interdisciplinary connections, the possible applications elucidated so far have likely barely scratched the surface."}]
Aesthetics, Personalization and Recommendation: A survey on Deep Learning in Fashion
Looking over the past decade, deep learning has achieved significant success in many popular industries and areas. Perception tasks, including visual object recognition, text understanding, and speech recognition, have been revolutionized across different domains, and there is little comparison as to how successful deep learning has been. Still, when we discuss deep learning in the concrete terms of the fashion industry, we see many opportunities and research areas that are still open to work on. As we all know, fashion is an ever-evolving industry; new trends set in with every passing second. Clothing design remains one of the most creative realms in the contemporary world [ 21 ], whether because of the considerable creative part of the design process or the equivocal information about clothing. Internet shopping has also grown incredibly in the last few years, and fashion has created immense opportunities. Exciting applications for image understanding, retrieval, and tagging are surfacing, and there are many different application areas they can be used in; for example, text analysis, image analysis, and similarity retrieval can all be utilized in fashion. Deep learning is thus an approach we can use to train computers to perform human-like tasks such as recognizing speech, identifying images, or making predictions. For example, results described in the apparel design and fashion industry allow users to translate an image into text that might be interpreted as a description of the garment based on its sketch.
As the advancements in deep learning, CV, and AI grow stronger day by day, their usage in the fashion industry has also become a very popular topic. From product personalization to better design, there are multiple ways in which AI and machine learning technologies are impacting the global fashion industry, and the increasing investment by leading fashion brands in these technologies is proof of their immense potential. They provide enhanced customer service, virtual merchandising, smarter manufacturing processes, and improved inventory management; they need less manpower through automation and reduce returned products, which also improves customer satisfaction. One of the biggest contributions is personalization, which is pretty much the key to business success: deep learning technologies such as AI and ML, together with business analytics, enable fashion businesses to keep track of fashion trends and the purchasing behavior of individual customers. Whether it is a trend or a season prediction, much can be done with these powerful tools, and the fashion industry is magnified by them. This is a field with the potential to grow and keep expanding, so any future research in this line would pave the way for more jaw-dropping developments.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Aesthetics, Personalization and Recommendation: A survey on Deep Learning in Fashion* and the introduction:If we go over the past decade and see how deep learning has achieved significant success in many popular Industries and areas. We observe how perception tasks, including visual object recognition and text understanding and speech recognition, have revolutionized different regions. There is no comparison as to how successful deep learning has been. Still, suppose we want to discuss deep learning in the real terms of the fashion industry. In that case, we see a lot of opportunities and research areas that are still available to work on. As we all know, fashion is an ever-evolving industry. There are new trends that are setting in every second that is passing by. Although cloth- ing design is like one of the most creative realms in the Contemporary World [ 21 ], whether it’s because of the considerable creative part of the design process or equivocal information about clothing, the fact remains to be. Internet shopping has also grown incredibly in the last few years, and fashion has created immense opportunities. Exciting applications for image understanding , retrieval and tagging are surfacing, and there are loads of different application areas that they can be used on. For example, text analysis, image analysis, and similarity retrieval can be utilized in fashion. So deep learning is an aspect that we can use to train our computer to perform human- like tasks such as recognizing speech, identifying images or making predictions. For example, the results described in the apparel design and fashion industry allow users to translate the im- age into the text that might as well be interpreted as a description of the garment based on its sketch."},{"role":"assistant","content":"Conclusion: As the advancements in deep learning, CV and AI are getting stronger day by day their usage in the fashion industry has also become a very popular topic. From product personalization or better designing there are multiple ways in which AI and machine learning Technologies are impacting the global fashion industry and they are increasing the investment by Leading fashion brands in these Technologies are a proof of their immense potential. They provide enhanced customer service, Virtual merchandising, smart manufacturing process and improved inventory management and need less Manpower through Automation and provide reduction in returned products which also improves customer satisfaction. And one of the biggest things is personalization, which is pretty much the key of business success and thanks to deep learning Technologies like AI and ML along with business analytics is enabling fashion business to keep track of fashion trends and purchasing behavior of individual customers. So now it may be a trend or it may be a season prediction. You can do anything with these powerful tools and the fashion industry is magnified. And this is a field that has the potential to grow and ever expand, so any future research in this line that will be done would be something that paves way ahead for more jaw dropping phenomenon."}]
DaRE: A Cross-Domain Recommender System with Domain-aware Feature Extraction and Review Encoder
With the rapid growth of e-commerce, recommender systems have become an obligatory tool for interconnecting customers with relevant items. Early schemes suffer from the cold-start problem caused by data insufficiency. To tackle this problem, some of them exploit auxiliary information such as social relations [ 12 ], the trustworthiness of reviewers [ 1 ], item images [ 40 ], and textual information. In particular, textual or linguistic information such as reviews is commonly available, and many text-aided recommendation algorithms have been introduced [6, 9, 10, 36, 45].
In this paper, we propose DaRE, a novel domain adaptation method utilizing review texts from multiple domains for knowledge transfer. Compared to previous approaches, our method is able to capture domain-invariant and domain-specific information of different categories with the aid of domain-aware feature extraction. Moreover, we suggest the use of a review encoder to better represent the attribute of reviews that users will generate after purchasing a specific item. Extensive experiments and ablation studies on real datasets confirm the superiority of our method, which is independent of users and items of different domains.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*DaRE: A Cross-Domain Recommender System with Domain-aware Feature Extraction and Review Encoder* and the introduction:With the rapid growth of e-commerce, recommender systems have become an obligatory tool for interconnecting customers with rele- vant items. Early schemes suffer from the cold-start problem caused by data insufficiency. To tackle the problems, some of them exploit auxiliary information such as social relations [ 12 ], the trustworthi- ness of reviewers [ 1 ], item images [ 40 ], and textual information. Especially, textual or linguistic information such as reviews are com- monly available, and many text-aided recommendation algorithms have been introduced [6, 9, 10, 36, 45]."},{"role":"assistant","content":"Conclusion: In this paper, we propose DaRE, a novel domain adaptation method utilizing review texts from multiple domains for a knowledge trans- fer. Compared to previous approaches, our method is able to capture domain-invariant and domain-specific information of different cat- egories with the aid of domain-aware feature extraction. Moreover, we suggest the use of a review encoder, to better represent the attribute of reviews that the users will generate after purchasing a specific item. Extensive experiments and ablation studies on real datasets confirm the superiority of our method which is indepen- dent of users and items of different domains."}]
Empirical Review of Automated Analysis Tools on 47,587 Ethereum Smart Contracts
Blockchain technology has been receiving considerable attention from industry and academia, for it promises to disrupt the digital online world by enabling a democratic, open, and scalable digital economy based on decentralized distributed consensus without the intervention of third-party trusted authorities. Among the currently available blockchain-based platforms, Ethereum [ 8 ] is one of the most popular, mainly because it enables developers to write distributed applications (Dapps) based on smart contracts — programs that are executed across a decentralised network of nodes. The main
In this paper, we presented an empirical evaluation of 9 static analysis tools on 69 annotated vulnerable contracts and on 47,518 contracts taken from the Ethereum network. The goal of this experiment was to obtain an overview of the current state of static analysis tools for Ethereum smart contracts. During this empirical evaluation, we considered all available smart contract analysis tools and all the available Ethereum contracts that have at least one transaction. We used the DASP10 categories of smart contract vulnerabilities as the reference to classify vulnerabilities.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Empirical Review of Automated Analysis Tools on 47,587 Ethereum Smart Contracts* and the introduction:Blockchain technology has been receiving considerable attention from industry and academia, for it promises to disrupt the digital online world by enabling a democratic, open, and scalable digital economy based on decentralized distributed consensus without the intervention of third-party trusted authorities. Among the currently available blockchain-based platforms, Ethereum [ 8 ] is one of the most popular, mainly because it enables developers to write dis- tributed applications (Dapps) based on smart contracts — programs that are executed across a decentralised network of nodes. The main"},{"role":"assistant","content":"Conclusion: In this paper, we presented an empirical evaluation of 9 static anal- ysis tools on 69 annotated vulnerable contracts and on 47,518 con- tracts taken from the Ethereum’s network. The goal of this ex- periment was to obtain an overview of the current state of static analysis tools for Ethereum smart contracts. During this empirical evaluation, we considered all available smart contracts analysis tools and all the available Ethereum contracts that have at least one transaction. We used the DASP10 category of smart contract vulnerabilities as the reference to classify vulnerabilities."}]
Distributed Machine Learning on Mobile Devices: A Survey
Nowadays, machine learning, as well as deep learning, has become a hot topic and attracts great attention from both academia and industry. The core idea of machine learning is to use large amounts of data to fit a model that can generalize well to unseen inputs. As data size and model complexity increase, it becomes harder for a single server to accomplish a machine learning task. To address this problem, distributed machine learning was developed. A typical distributed machine learning task is done through the cooperation of multiple servers. They communicate with each other to transfer essential information during training via shared data buses or a fast
In this survey paper, we have introduced the development of machine learning in recent years, from traditional machine learning to deep learning, followed by distributed machine learning and mobile learning. We have discussed their purposes and shown the necessity of mobile distributed machine learning in this information era. After that, the general properties of mobile distributed machine learning have been stated, and a clear task definition for it has been given. To explain the architecture of mobile distributed machine learning, we have divided the whole learning process into three steps and listed possible solutions for them. Features of the solutions have been clearly compared and summarized in Table 1 and Table 2. When trying to apply mobile distributed machine learning to a particular task, users and developers can refer to this comparison as a guideline. Federated learning, as an example of mobile distributed machine learning whose main purpose is privacy preservation, has also been mentioned and its development has been introduced. We have also discussed the future directions of mobile distributed machine learning research. In general, there still exist many challenges if we want to implement mobile distributed machine learning in popular real-life applications, which suggests that more effort should be devoted to the further improvement of this system, including aspects such as efficiency, privacy, and robustness.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Distributed Machine Learning on Mobile Devices: A Survey* and the introduction:Nowadays, machine learning, as well as deep learning, has become a hot topic and attracts great attention from both academia and industry. The core idea of machine learning is to use large amounts of data to fit a model that can generalize well to unseen inputs. As the increase of data size and model complexity, it becomes harder for a single server to accomplish a machine learning task. To address the problem, distributed machine learning is developed. A typical distributed machine learning task is done through the cooperation of multiple servers. They communicate with each other to transfer essential information during training via shared data buses or a fast"},{"role":"assistant","content":"Conclusion: In this survey paper, we have introduced the development of machine learning in recent years, from traditional machine learning to deep learning and followed by distributed machine learning and mobile learning. We have discussed their purposes and shown the necessity of mobile distributed machine learning in this information era. After that, the general properties of mobile distributed machine learning have been claimed, and a clear task definition for it has been given. To explain the architecture of mobile distributed machine learning, we have divided the whole learning process into three steps and listed possible solutions for them. Features of the solutions have been clearly compared and summarized in Table 1 and Table 2. When trying to apply mobile distributed machine learning to a particular task, users and developers can refer to this comparison as guidelines. Federated learning, as an example of mobile distributed machine learning whose main purpose is privacy preservation, has also been mentioned and its developing progress has been introduced. We have also discussed the future directions of mobile distributed machine learning research. In general, there still exists many challenges if we want to implement mobile distributed machine learning on popular real-life applications, which suggests that more efforts should be paid on the further improvement of this system, including aspects such as efficiency, privacy and robustness."}]
An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos
Unsupervised representation learning has become an important domain with the advent of deep generative models, which include the variational autoencoder (VAE), generative adversarial networks (GANs), Long Short-Term Memory networks (LSTMs), and others. Anomaly detection is a well known subdomain of unsupervised learning in the machine learning and data mining community. Anomaly detection for images and videos is challenging due to their high dimensional structure.
In this review paper we have focused on categorizing the different unsupervised learning models for the task of anomaly detection in videos into three classes based on the prior information used to build the representations to characterize anomalies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos* and the introduction:Unsupervised representation learning has become an impor- tant domain with the advent of deep generative models which include the variational autoencoder (VAE), generative adver- sarial networks (GANs), Long Short Term memory networks (LSTMs), and others. Anomaly detection is a well known sub- domain of unsupervised learning in the machine learning and data mining community. Anomaly detection for images and videos are challenging due to their high dimensional structure."},{"role":"assistant","content":"Conclusion: In this review paper we have focused on categorizing the dif- ferent unsupervised learning models for the task of anomaly detection in videos into three classes based on the prior information used to build the representations to characterize anomalies."}]
Data-Driven Decision Making in COVID-19 Response: A Survey
The COVID-19 pandemic is an ongoing epidemic of the coronavirus disease [ 21 ]. The epidemic was declared a public health emergency of international concern by the World Health Organization (WHO) and it is impacting all aspects of life. At the same time, the COVID-19 epidemic is also generating a large amount of data that could be used to effectively guide decision making in the COVID-19 response [ 79 ]. These emergency responses include the detection, prevention and control of the disease, and the recovery from global economic disruption, which is potentially the largest global recession since the Great Depression [75, 82].
Data-driven decision making has been demonstrated to be both important and effective for the COVID-19 response. Different countries and regions have implemented many policies that are data-driven, such as prevention and control policies, psychological counselling policies, financial aid policies, and resumption/re-opening policies. In this paper we have summarized these policies including examples from all around the world. From different kinds of data related to COVID-19, policy makers have developed large volumes of useful information, and based on this information and machine learning algorithms, data are guiding us to better and more appropriate choices. We first discussed COVID-19 related data and some prevention and control policies and then described the psychological counselling policies driven by COVID-19 data. In the financial aid section, we have discussed the protection of lives as well as economic recovery. Following on, we introduced in detail policies in the resumption of both work and school. Finally, we listed the current challenges and open issues, including data collection and quality, complex data analysis, and fairness of decision making.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Data-Driven Decision Making in COVID-19 Response: A Survey* and the introduction:The COVID-19 pandemic is an ongoing epidemic of the coronavirus disease [ 21 ]. The epidemic was declared a public health emergency of international concern by the World Health Organization (WHO) and it is impacting all aspects of life. At the same time, the COVID-19 epidemic is also generating a large amount of data that could be used to effectively guide decision making in the COVID-19 response [ 79 ]. These emergency responses include the detection, prevention and control of the disease, and the recovery from global economic disruption, which is potentially the largest global recession since the Great Depression [75, 82]."},{"role":"assistant","content":"Conclusion: Data-driven decision making has been demonstrated to be both important and effective for the COVID-19 response. Different countries and regions have implemented many policies that are data-driven, such as prevention and control policies, psychological counselling policies, financial aid policies, and resumption/re- opening policies. In this paper we have summarized these policies including examples from all around the world. From different kinds of data related to COVID-19, policy makers have developed large volumes of useful information, and based on this information and machine learning algorithms, data are guiding us to better and more appropriate choices. We first discussed COVID-19 related data and some prevention and control policies and then described the psychological counselling policies driven by COVID-19 data. In the financial aid section, we have discussed the protection of lives as well as economic recovery. Following on, we introduced in detail policies in the resumption of both work and school. Finally, we listed the current challenges and open issues, including data collection and quality, complex data analysis, and fairness of decision making."}]
A Review of Machine Learning Applications in Fuzzing
Fuzzing is a technique in which a large number of generated inputs, both valid and invalid, are fed into a program to search for flaws and vulnerabilities. Fuzzers, or automated tools to perform fuzzing, have played an important role in quality assurance, system administration, and vulnerability assessment over the last three decades [ 15 , 16 , 34 , 48 ]. Modern fuzzers now incorporate techniques from other disciplines; in this survey, we explore how some modern fuzzers incorporate different types of machine learning (ML). We specifically focus on fuzzers used for vulnerability assessment due to their widespread use.
In this survey, we discussed how machine learning (ML) has been applied to fuzzing. Because fuzzing problems lend themselves more naturally to unsupervised and reinforcement learning techniques, supervised learning is rarely used to support the fuzzing process. ML is most often used to support the Generate Inputs stage of the fuzzing process. Unsupervised methods currently offer the most benefit with tools such as AFL making unsupervised algorithms part of their workflow. In contrast, reinforcement learning and deep learning are being explored for possible improvements but are not yet part of any standard fuzzing tool. ML has also been applied to analyze Interesting Program States during post-fuzzing, helping to triage crashes and support root cause analysis. However, ML has not been applied to the Select Inputs stage of the fuzzing process, possibly because this stage is not a major bottleneck. Additionally ML has not been applied to evaluating reproducibility of crashes during post-fuzzing. The lack of ML research in certain portions of the fuzzing process may be due to ML being less understandable than other heuristic approaches, or it may be due to a lack of accessible training data.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review of Machine Learning Applications in Fuzzing* and the introduction:Fuzzing is a technique in which a large number of gener- ated inputs, both valid and invalid, are fed into a program to search for flaws and vulnerabilities. Fuzzers, or automated tools to perform fuzzing, have played an important role in quality assurance, system administration, and vulnerabil- ity assessment over the last three decades [ 15 , 16 , 34 , 48 ]. Modern fuzzers now incorporate techniques from other dis- ciplines; in this survey, we explore how some modern fuzzers incorporate different types of machine learning (ML). We specifically focus on fuzzers used for vulnerability assess- ment due to their widespread use."},{"role":"assistant","content":"Conclusion: In this survey, we discussed how machine learning (ML) has been applied to fuzzing. Because fuzzing problems lend them- selves more naturally to unsupervised and reinforcement learning techniques, supervised learning is rarely used to sup- port the fuzzing process. ML is most often used to support the Generate Inputs stage of the fuzzing process. Unsupervised methods currently offer the most benefit with tools such as AFL making unsupervised algorithms part of their workflow. In contrast, reinforcement learning and deep learning are being explored for possible improvements but are not yet part of any standard fuzzing tool. ML has also been applied to analyze Interesting Program States during post-fuzzing, helping to triage crashes and support root cause analysis. However, ML has not been applied to the Select Inputs stage of the fuzzing process, possibly because this stage is not a major bottleneck. Additionally ML has not been applied to evaluating reproducibility of crashes during post-fuzzing. The lack of ML research in certain portions of the fuzzing process may be due to ML being less understandable than other heuristic approaches, or it may be due to a lack of accessible training data."}]
A Survey on Open Information Extraction
Information extraction (IE) turns the unstructured information expressed in natural language text into a structured representation (Jurafsky and Martin, 2009) in the form of relational tuples consisting of a set of arguments and a phrase denoting a semantic relation between them: ⟨ arg1 ; rel ; arg2 ⟩ . Traditional approaches to IE focus on answering narrow, well-defined requests over a predefined set of target relations on small, homogeneous corpora. To do so, they take as input the target relation along with hand-crafted extraction patterns or patterns learned from hand-labeled training examples (e.g., Agichtein and Gravano (2000), Brin (1999), Riloff and Jones (1999)). Consequently, shifting to a new domain requires the user to not only name the target relations, but also to manually define new extraction rules or to annotate new training data by hand. Thus, those systems rely on extensive human involvement. In order to reduce the manual effort required by IE approaches, Banko et al. (2007) introduced a new extraction paradigm: Open IE. Unlike traditional IE methods, Open IE is not limited to a small set of target relations known in advance, but rather extracts all types of relations found in a text. In that way, it facilitates the domain-independent discovery of relations extracted from text and scales to large, heterogeneous corpora such as the Web. Hence, Banko et al. (2007) identified three major challenges for Open IE systems:
We presented an overview of the various methods that were proposed to solve the task of Open IE. We classified them into the four categories of learning-based, rule-based, clause-based systems and approaches capturing inter-proposition relationships, thereby showing their evolution over time, as well as the specific problems they tackle. Moreover, we described the approaches that were used to assess the performance of the proposed Open IE systems, while depicting the gaps in the evaluation procedures that are commonly applied to date. Finally, we identified directions for future work.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Open Information Extraction* and the introduction:Information extraction (IE) turns the unstructured information expressed in natural language text into a structured representation (Jurafsky and Martin, 2009) in the form of relational tuples consisting of a set of arguments and a phrase denoting a semantic relation between them: ⟨ arg1 ; rel ; arg2 ⟩ . Traditional ap- proaches to IE focus on answering narrow, well-defined requests over a predefined set of target relations on small, homogeneous corpora. To do so, they take as input the target relation along with hand-crafted extraction patterns or patterns learned from hand-labeled training examples (e.g., Agichtein and Gravano (2000), Brin (1999), Riloff and Jones (1999)). Consequently, shifting to a new domain requires the user to not only name the target relations, but also to manually define new extraction rules or to annotate new training data by hand. Thus, those systems rely on extensive human involvement. In order to reduce the manual effort required by IE approaches, Banko et al. (2007) introduced a new extraction paradigm: Open IE. Unlike traditional IE methods, Open IE is not limited to a small set of target relations known in advance, but rather extracts all types of relations found in a text. In that way, it facilitates the domain- independent discovery of relations extracted from text and scales to large, heterogeneous corpora such as the Web. Hence, Banko et al. (2007) identified three major challenges for Open IE systems:"},{"role":"assistant","content":"Conclusion: We presented an overview of the various methods that were proposed to solve the task of Open IE. We classified them into the four categories of learning-based, rule-based, clause-based systems and ap- proaches capturing inter-proposition relationships, thereby showing their evolution over time, as well as the specific problems they tackle. Moreover, we described the approaches that were used to assess the performance of the proposed Open IE systems, while depicting the gaps in the evaluation procedures that are commonly applied to date. Finally, we identified directions for future work."}]
Software Testing Process Models Benefits & Drawbacks: a Systematic Literature Review
Software testing is a key phase of the software development process, as it represents the process of quality validation and verification of a software product [79]. Such a phase is even more crucial nowadays, as software has become increasingly complex, mission-, safety-critical, and essential in daily activities, calling for an increase in quality [12, 46, 59].
Software testing is a key part of the software development process. The aim of this review was to provide an overview of existing software models that can be applied to improve the overall testing process. As the methodology for conducting the review, we used the Systematic Literature Review (SLR), to find empirical studies reporting benefits and drawbacks from the application of the models. Overall, 17 testing models were identified during the process. They differ in the area of usage and model representation. Most of the models are described as universally applicable, but several approaches were customized to satisfy specific requirements of domains such as embedded software, automated testing or military systems. We focused also on the aspects that the testing models helped to improve in the concrete cases. Most of the empirical studies describe the adaptation of a model to the testing process and thus standardization of the process as the main improvement. The next important supported aspect is the improvement of product quality and several studies focus on specific aspects such as measurements or defects detection / reduction.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Software Testing Process Models Benefits & Drawbacks: a Systematic Literature Review* and the introduction:Software testing is a key phase of the software devel- opment process, as it represents the process of quality val- idation and verification of a software product [79]. Such phase is even more crucial nowadays, as software has be- come increasingly complex, mission-, safety- critical, and essential in daily activities, calling for an increase in qual- ity [12, 46, 59]."},{"role":"assistant","content":"Conclusion: Software testing is a key part of the software develop- ment process. The aim of this review was to provide an overview of existing software models that can be applied to improve the overall testing process. As the methodology for conducting the review, we used the Systematic Literature Review (SLR), to find empirical studies reporting benefits and drawbacks from the application of the models. Over- all, 17 testing models were identified during the process. They differ in the area of usage and model representation. Most of the models are described as universally applicable, but several approaches were customized to satisfy specific requirements of domains such as embedded software, auto- mated testing or military systems. We focused also on the aspects that the testing models helped to improve in the con- crete cases. Most of the empirical studies describe the adap- tation of a model to the testing process and thus standard- ization of the process as the main improvement. The next important supported aspect is the improvement of product quality and several studies focus on specific aspects such as measurements or defects detection / reduction."}]
An overview of gradient descent optimization algorithms ∗
Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne’s, caffe’s, and keras’ documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by.
In this blog post, we have initially looked at the three variants of gradient descent, among which mini-batch gradient descent is the most popular. We have then investigated algorithms that are most commonly used for optimizing SGD: Momentum, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, Adam, as well as different algorithms to optimize asynchronous SGD. Finally, we’ve considered other strategies to improve SGD such as shuffling and curriculum learning, batch normalization, and early stopping.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*An overview of gradient descent optimization algorithms ∗* and the introduction:Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne’s 2 , caffe’s 3 , and keras’ 4 documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by."},{"role":"assistant","content":"Conclusion: In this blog post, we have initially looked at the three variants of gradient descent, among which mini-batch gradient descent is the most popular. We have then investigated algorithms that are most commonly used for optimizing SGD: Momentum, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, Adam, as well as different algorithms to optimize asynchronous SGD. Finally, we’ve considered other strategies to improve SGD such as shuffling and curriculum learning, batch normalization, and early stopping."}]
Interpretation of Black Box NLP Models: A Survey
Machine learning (ML) is now ubiquitous in the current era. Some ML models, especially deep learning-based models, achieve near-human accuracy. This has led to the adoption of ML in several areas, including critical areas like health, financial markets, and criminal justice ( Lipton , 2016 ). It has raised an important question regarding the black-box nature of ML models.
This survey has presented an overview of interpretability methods from a causal and non-causal perspective. In this survey, we have presented a brief overview of the different approaches and some theoretical discussion around those methods. We have presented representative paper examples along with the specific NLP tasks they highlight.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Interpretation of Black Box NLP Models: A Survey* and the introduction:Machine learning (ML) is now ubiquitous in the current era. Some of the ML models, especially deep learning-based models, achieve near-human accuracy. It has led to the adoption of ML into several areas, including critical areas like health, financial markets, criminal justice. ( Lipton , 2016 ). It had raised an important question regarding the black-box nature of the ML models."},{"role":"assistant","content":"Conclusion: This survey has presented an overview of inter- pretability methods from a causal and non-causal perspective. In this survey, we have presented a brief overview of the different approaches and some theoretical discussion around those methods. We have presented the representative paper exam- ples along with the specific NLP tasks they want to highlight."}]
A Survey on Dialogue Summarization: Recent Advances and New Frontiers
Dialogue summarization aims to distill the most important information from a dialogue into a shorter passage, which can help people quickly capture the highlights of a semi-structured and multi-participant dialogue without reviewing the complex dialogue context [Gurevych and Strube, 2004].
This article presents the first comprehensive survey on the progress of dialogue summarization. We thoroughly summarize the existing works, highlight their challenges, and provide leaderboards. Furthermore, we shed light on some new trends in this research field. We hope this survey can facilitate research on dialogue summarization.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Dialogue Summarization: Recent Advances and New Frontiers* and the introduction:Dialogue summarization aims to distill the most important information from a dialogue into a shorter passage, which can help people quickly capture the highlights of a semi- structured and multi-participant dialogue without reviewing the complex dialogue context [Gurevych and Strube, 2004]."},{"role":"assistant","content":"Conclusion: This article presents the first comprehensive survey on the progress of dialogue summarization. We thoroughly summa- rize the existing works, highlight their challenges and provide leaderboards. Furthermore, we shed light on some new trends in this research field. We hope this survey can facilitate the research of the dialogue summarization."}]
Trust your neighbors: A comprehensive survey of neighborhood-based methods for recommender systems
The appearance and growth of online markets has had a considerable impact on the habits of consumers, providing them access to a greater variety of products and information on these goods. While this freedom of purchase has made online commerce into a multi-billion dollar industry, it also made it more difficult for consumers to select the products that best fit their needs. One of the main solutions proposed for this information overload problem are recommender systems, which provide automated and personalized suggestions of products to consumers.
One of the earliest approaches proposed for the task of item recommendation, neighborhood-based recommendation still ranks among the most popular methods for this problem. Although quite simple to describe and implement, this recommendation approach has several important advantages, including its ability to explain a recommendation with the list of the neighbors used, its computational and space efficiency which allows it to scale to large recommender systems, and its marked stability in an online setting where new users and items are constantly added. Another of its strengths is its potential to make serendipitous recommendations that can lead users to the discovery of unexpected, yet very interesting items.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Trust your neighbors: A comprehensive survey of neighborhood-based methods for recommender systems* and the introduction:The appearance and growth of online markets has had a considerable impact on the habits of consumers, providing them access to a greater variety of products and information on these goods. While this freedom of purchase has made online com- merce into a multi-billion dollar industry, it also made it more di ffi cult for consumers to select the products that best fit their needs. One of the main solutions proposed for this information overload problem are recommender systems, which provide au- tomated and personalized suggestions of products to consumers."},{"role":"assistant","content":"Conclusion: One of the earliest approaches proposed for the task item recommendation, neighbor- hood-based recommendation still ranks among the most popular methods for this problem. Although quite simple to describe and implement, this recommendation approach has several important advantages, including its ability to explain a recom- mendation with the list of the neighbors used, its computational and space e ffi ciency which allows it to scale to large recommender systems, and its marked stability in an online setting where new users and items are constantly added. Another of its strengths is its potential to make serendipitous recommendations that can lead users to the discovery of unexpected, yet very interesting items."}]
Emotion in Reinforcement Learning Agents and Robots: A Survey.
This survey systematically covers the literature on computational models of emotion in reinforcement learning (RL) agents. Computational models of emotions are usually grounded in the agent decision-making architecture. In this work we focus on emotion models in a successful learning architecture: reinforcement learning, i.e. agents optimizing some reward function in a Markov Decision Process (MDP) formulation. To directly avoid confusion: the topic does not imply the agent should
This article surveyed emotion modelling in reinforcement learning (RL) agents. The literature has been structured according to the intrinsically motivated RL framework. We conclude by identifying the main benefits encountered in this work for the machine learning (ML), human-robot interaction (HRI), and affective modelling (AM) communities. For machine learning, emotion may benefit learning efficiency by providing inspiration for intrinsic motivation, exploration, and meta-parameter tuning. The current results should stimulate further cross-over between (intrinsic) motivation, model-based RL, and emotion-RL research. For HRI research, emotions obviously are important for social interaction. More work should be done on implementing emotion models in interactive reinforcement learning algorithms, for which the survey presents a practical guideline on implementing emotions in RL agents. For affective modelling, we conclude that cognitive theories (like appraisal theory) can be expressed well in RL agents. The general benefits of RL agents (they require few assumptions, are easily applicable to all kinds of domains, and allow for learning) make them a promising test-bed for affective modelling research. This survey identifies opportunities for future work with respect to the implementation and evaluation of emotion models in RL agents.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Emotion in Reinforcement Learning Agents and Robots: A Survey.* and the introduction:This survey systematically covers the literature on computational models of emo- tion in reinforcement learning (RL) agents. Computational models of emotions are usually grounded in the agent decision-making architecture. In this work we focus on emotion models in a successful learning architecture: reinforcement learning, i.e. agents optimizing some reward function in a Markov Decision Process (MDP) formulation. To directly avoid confusion: the topic does not imply the agent should"},{"role":"assistant","content":"Conclusion: This article surveyed emotion modelling in reinforcement learning (RL) agents. The literature has been structured according to the intrinsically motivated RL framework. We conclude by identifying the main benefits encountered in this work for the machine learning (ML), human-robot interaction (HRI), and affective mod- elling (AM) communities. For machine learning, emotion may benefit learning effi- ciency by providing inspiration for intrinsic motivation, exploration and for meta- parameter tuning. The current results should stimulate further cross-over between (intrinsic) motivation, model-based RL and emotion-RL research. For HRI re- search, emotions obviously are important for social interaction. More work should be done on implementing emotion models in interactive reinforcement learning algorithms, for which the survey presents a practical guideline on implementing emotions in RL agents. For affective modelling, we conclude that cognitive theories (like appraisal theory) can well be expressed in RL agents. The general benefits of RL agents (they require little assumptions, are easily applicable to all kinds of domains, and allow for learning) make them a promising test-bed for affec- tive modelling research. This survey identifies opportunities for future work with respect to implementation and evaluation of emotion models in RL agents."}]
Handcrafted Feature Selection Techniques for Pattern Recognition: A Survey
In machine learning, representing and organizing data into meaningful information is fundamental to allowing a classifier, such as a neural network, to determine the class membership of observations [1]. Observed data can be obtained from sources in different domains, such as robot sensors, digital cameras, medical instruments, and digital game agents, where the main objective is to allow a computer program to perform decision-making [2]. Organizing and representing data is not a trivial task; the main problem is the lack of a standard and fast way to evaluate and select a minimal set of information to represent an observed object that will maximize a classifier's accuracy [3]. In order to tackle that problem, feature selection techniques are used, where meaningful information, obtained from observed objects in the task's domain, is selected and stored inside feature vectors used as a classifier's input.
Feature selection and extraction is a way to reduce data dimensionality and facilitate the recognition of objects. Although all feature selection methods aim at improving the identification of relevant data, they are applied in different ways that may have a high or low cost depending on the problem. Filter approaches act as a pre-processing step and do not rely on a classifier's response; they are generally considered less effective at selecting information than Wrapper methods. By contrast, Wrappers select features by interacting with a classifier, which may yield better accuracy in identifying relevant features. However, Wrapper methods may not be recommended for high-dimensional data. An Embedded method may be an interesting way to get the advantages of both Filter and Wrapper methods, since it is a hybrid approach. Therefore, the presented feature selection methods can be accurate and efficient if applied with their strengths and weaknesses in mind; finding which one best fits the problem's domain may be the hardest task.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Handcrafted Feature Selection Techniques for Pattern Recognition: A Survey* and the introduction:In machine learning, representing and organizing data into meaningful information is a fundamental key that allows a classifier, such as neural networks, to determine the belongingness of observations into classes [1]. Observed data can be obtained from sources, in different domains, such as robot sensors, digital cameras, medical instruments, and digital games agents, where the main objective is to allow a computer program to perform decision-making [2]. Organize and representing data is not a trivial task, where the main problem is the lack of a standard and fast way to evaluate and select a minimal set of information to represent an observed object that will maximize a classifier's accuracy [3]. In order to tackle that problem, feature selection tech- niques are used, where meaningful information, obtained from observed objects on the task's domain is selected and stored inside feature vectors used as a classifier's input."},{"role":"assistant","content":"Conclusion: Feature selection and extraction is a way to reduce data dimensionality and facilitate the recognition of objects. Al- though all the feature selection methods aim in improving the identification of relevant data, they are applied in differ- ent ways, that may have a high or low cost depending on the problem. Filter approaches act as a pre-processing step and do not rely on a classifier's response. Filter methods are considered inefficient to select information when compared to Wrapper methods. By contrast, Wrappers selects features by interacting with a classifier, which may have better accuracy in identifying features. However, Wrapped methods may not be recommended for high dimensional data. An Embedded method may be an interesting way in getting advantages of both, Filters and Wrapper methods since it is a hybrid approach. Therefore, the presented feature selection methods can be accurate and efficient if applied considering their positives and negatives, finding which one fits best the problem's domain may be the hardest task."}]
A Survey on Anonymous Communication Systems with a Focus on Dining Cryptographers Networks
The continuously growing collection of data by pervasive computing techniques and great advances in communication during the current information age are bringing many benefits to society. This encompasses transformative changes and opportunities created in many aspects of daily life, e.g. healthcare, transportation, education and social interaction [36, 98]. However, much of this collected data might be sensitive or contain personal information. Therefore, its collection and transmission pose serious privacy concerns. These could prevent a wider incorporation of new technologies into daily lives [118, 32].
Although data encryption algorithms protect the content of communications from unauthorised access, traffic analysis methods can be used to extract valuable information about communicating parties. Anonymous Communication Systems (ACSs) are able to additionally provide strong privacy properties like anonymity and ideally unobservability. Hence, the growing trend of privacy-sensitive data collection, for instance via IoT devices, and the increasing usage of wireless communications in data transmission require us to rethink how to apply the privacy protections offered by ACSs. ACSs are designed to prevent traffic-analysis attacks by making the traffic of the network’s users indiscernible, and their well-known methods to this end are commonly based on two concepts: Mix networks and DCNs. The latter, as we visualised, offers better privacy protections. However, both intrinsically introduce significant computation and communication overhead and are less scalable, which often makes them initially appear impractical for real-world scenarios.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Anonymous Communication Systems with a Focus on Dining Cryptographers Networks* and the introduction:The continuously growing collection of data by pervasive computing techniques and great advances in com- munication during the current information age are bringing many benefits to society. This encompasses transformative changes and opportunities created in many aspects of daily life, e.g. healthcare, transporta- tion, education and social interaction [36, 98]. However, much of these collected data might be sensitive or contain personal information. Therefore, their collection and transmission poses serious privacy concerns. These could prevent a wider incorporation of new technologies into daily lives [118, 32]."},{"role":"assistant","content":"Conclusion: Although data encryption algorithms protect the content of communications from unauthorised access, traffic analysis methods can be used to extract valuable information about communicating parties. Anonymous Communication Systems (ACSs) are able to additionally provide strong privacy properties like anonymity and ideally unobservability. Hence, the growing trend of privacy-sensitive data collection, for instance via IoT devices, and the increasing usage of wireless communications in data transmission require us to rethink to apply the privacy protections offered by ACSs. The ACSs are proposed to prevent traffic-analysis attacks by making the traffic of the network’s users indiscernible and their well-known methods to this end commonly base on two concepts: Mix networks and DCNs. The latter, as we visualised, offers better privacy protections. However, both intrinsically introduce significant computation and communications overhead and are less scalable, which often make them initially appear impractical for real-world scenarios."}]
Learning to Communicate in Multi-Agent Reinforcement Learning : A Review
In recent years, Multi-Agent Reinforcement Learning has received a lot of interest, with more and more complex algorithms and structures improving the policy of multiple agents in more and more complex environments [1]. In multi-agent settings, agents take actions and learn from their rewards simultaneously in the same environment. These interactions could be either competitive [2] or cooperative, or a mix of the two. This review mainly tackles the case of partially observable multi-agent environments. In real-world environments it is rare that the full state of the system can be provided to the agent or even be determined. In this kind of environment, agents may hold very valuable information for the decision making of other agents. In the case of common reward, agents would need to communicate seamlessly with the other agents in order to pass that information, coordinate their behaviours and increase the common reward [3].
We reviewed the main recent algorithms addressing the issue of communication and cooperation between multiple agents in a Reinforcement Learning environment. We also presented the current efforts in interpreting the language learned by these algorithms, and proposed a novel metric to evaluate the relevance of a generated communication strategy.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Learning to Communicate in Multi-Agent Reinforcement Learning : A Review* and the introduction:In recent years, Multi-Agent Reinforcement Learning has received a lot of interest, with more and more complex algorithms and structures improving the policy of multiple agents in more and more complex environments [1]. In Multi-agent settings, agents take actions and learn from their rewards simultaneously in the same environment. These interactions could be either competitive [2] or coop- erative or a mix of the two. This review mainly tackles the case of partially observable multi-agent environments. In real world environments it is rare that the full state of the system can be provided to the agent or even be determined. In this kind of environments, agents may hold very valuable information for the decision making of other agents. In the case of common reward, agents would need to communicate seamlessly with the other agents in order to pass that information, coordinate their behaviours and increase the common reward [3]."},{"role":"assistant","content":"Conclusion: We reviewed the main recent algorithms addressing the issue of communication and cooperation between multiple agents in a Reinforcement Learning environment. We also presented the current efforts in interpreting the language learned by these algorithms, and proposed a novel metric to evaluate the relevance of a generated communication strategy."}]
A Survey on Ransomware: Evolution, Taxonomy, and Defense Solutions
Recent years have witnessed a dramatic growth in the number of incidents involving a unique malware strain, namely ransomware. This notorious malware strain has been targeting not only ordinary end users, but also governments and business organizations in almost any sector. Numerous incidents include Fortune 500 companies [181], banks [51], cloud providers [52], chip manufacturers [156], cruise operators [173], threat monitoring services [93], governments [5, 190], medical centers and hospitals [55], schools [6], universities [136], and even police departments [71]. It has been predicted that the total loss of organizations due to ransomware will be around $20 billion in
In this paper, we provided a comprehensive survey of ransomware and ransomware defense research with respect to PCs/workstations, mobile devices and IoT/CPS environments. We presented a detailed overview of how ransomware has evolved over time, thoroughly analyzed the key building blocks of ransomware, proposed a taxonomy of notable ransomware families, and provided an extensive overview of ransomware defense research including analysis, detection and recovery techniques with respect to various platforms. In addition to these, we derived a list of open research problems that need to be addressed in future ransomware research and practice. As ransomware is already prevalent in PCs/workstations, is becoming more prevalent in mobile devices, has already hit IoT/CPS recently, and will likely grow further in the IoT/CPS domain very quickly, we believe that this paper will play a crucial role in understanding ransomware research with respect to target platforms and motivating further research.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Ransomware: Evolution, Taxonomy, and Defense Solutions* and the introduction:Recent years have witnessed a dramatic growth in the number of incidents a unique malware strain involved in, namely ransomware . This notorious malware strain has been targeting not only ordi- nary end users, but also governments and business organizations in almost any sector. Numerous incidents include Fortune 500 companies [181], banks [51], cloud providers [52], chip manufactur- ers [156], cruise operators [173], threat monitoring services [93], governments [5, 190], medical centers and hospitals [55], schools [6], universities [136], and even police departments [71]. It has been predicted that the total loss of organizations due to ransomware will be around \$20 billion in"},{"role":"assistant","content":"Conclusion: In this paper, we provided a comprehensive survey of ransomware and ransomware defense re- search with respect to PCs/workstations, mobile devices and IoT/CPS environments. We presented a detailed overview on how ransomware evolved in time, thoroughly analyzed the key building blocks of ransomware, proposed a taxonomy of notable ransomware families, and provided an ex- tensive overview of ransomware defense research including analysis, detection and recovery tech- niques with respect to various platforms. In addition to these, we derived a list of open research problems that need to be addressed in future ransomware research and practice. As ransomware is already prevalent in PCs/workstations, is becoming more prevalent in mobile devices, and has already hit IoT/CPS recently, and will likely grow further in the IoT/CPS domain very quickly, we believe that this paper will play a crucial role in understanding ransomware research with respect to target platforms and motivating further research."}]
Deep Learning for Video-Text Retrieval: a Review
With the advent of the big data era, video media software (e.g., YouTube, TikTok, and Instagram) has become the main way for human beings to learn and be entertained. Facing the explosive growth of multimedia information, people urgently need to find efficient ways to quickly search for an item that meets the user’s needs. In particular, the number of videos with different lengths has soared rapidly in recent years, making it more time-consuming and difficult to find the target video. Motivated by this, one of the most appropriate solutions is to search for the corresponding video by a language sentence, i.e., Video-Text Retrieval (VTR).
In this survey, we have provided a comprehensive overview of deep learning methods for the video-text retrieval task. We have reviewed recent major advances in video-textual feature extraction, feature matching and objective functions, presented their performance on publicly benchmarked datasets, and analyzed their merits and limitations. Although the VTR task has achieved remarkable progress over recent years, it is worth mentioning that there is still considerable room for research. Finally, we have summarized the difficulties and challenges faced by this task and discussed several future research directions, with the expectation of providing new insights for research on video-text retrieval in the future.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Video-Text Retrieval: a Review* and the introduction:With the advent of the big data era, video media software ( e . g ., YouTuBe, TikTok, and Instagram) has become the main way for human beings to learn and entertain. Facing the explosive growth of multimedia information, people urgently need to find efficient ways to quickly search for an item that meets the user’s needs. In particular, the number of videos with different lengths has soared rapidly in recent years, whereas it becomes more time- consuming and difficult to find the target video. Motivated by this, one of the most appropriate solutions is to search for the corresponding video by a language sentence, i . e ., Video-Text Retrieval (VTR)."},{"role":"assistant","content":"Conclusion: In this survey, we have provided a comprehen- sive overview of deep learning methods for the video-text retrieval task. We have reviewed recent major advances on video-textual feature extraction, feature matching and objective functions, pre- sented their performance on publicly benchmarked datasets, and analyzed their merits and limitations. Although the VTR task has achieved remarkable progress over recent years, it is worth mentioning that there are still some potential research space. Finally, we have summarized the difficulties and challenges faced by this task and discussed several future research directions, with the expectation to provide new insights for researching video-text retrieval in the future."}]
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Machine learning (ML) models are being increasingly relied on to make decisions in a wide variety of contexts. Sometimes these decisions can have serious repercussions for individuals, with many examples to be found in the health, legal or financial sectors (e.g., [4]). In such circumstances, explanations for a model’s output can help engender trust in the decision or help individuals take actions in order to obtain a more desirable outcome later.
In this paper, we presented a review of existing approaches to analyse the robustness of two popular classes of post-hoc local explainability methods: feature importance and counterfactual explanations. We defined robustness in the context of explainability and introduced a taxonomy to categorise existing approaches into three categories. Specifically, we categorise the present approaches (feature importance and counterfactual explanations) into methods that perturb inputs, methods that manipulate ML models, and methods that change hyperparameter values. We discussed several methods from each category and highlighted their key results. Overall, the majority of explainability methods are non-robust, and hence employing them to understand models used in safety-critical applications is risky. We also reviewed some recent methods that propose approaches to tackle some of the robustness challenges discussed above. Finally, we presented some prospective research directions to further analyse explainability methods, especially in the context of finance.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on the Robustness of Feature Importance and Counterfactual Explanations* and the introduction:Machine learning (ML) models are being increasingly relied on to make decisions in a wide variety of contexts. Sometimes these deci- sions can have serious repercussions for individuals, with many ex- amples to be found in the health, legal or financial sectors (e.g., [4]). In such circumstances, explanations for a model’s output can help engender trust in the decision or help individuals take actions in order to obtain a more desirable outcome later."},{"role":"assistant","content":"Conclusion: In this paper, we presented a review of existing approaches to anal- yse the robustness of two popular classes of post-hoc local ex- plainability methods - feature importance and counterfactual ex- planations. We defined robustness in the context of explainability and introduced a taxonomy to categorise existing approaches into three categories. Specifically, we categorise the present approaches (feature importance and counterfactual explanations) into - meth- ods that perturb inputs, methods that manipulate ML models, and methods that change hyperparameters values. We discussed sev- eral methods from each category and highlighted their key results. Overall, the majority of explainability methods are non-robust and hence employing them to understand models used in safety-critical applications is risky. We also reviewed some recent methods that propose approaches to tackle some of the robustness challenges discussed above. Finally, we presented some prospective research directions to further analyse explainability methods, especially in the context of finance."}]
A Review of Visual Odometry Methods and Its Applications for Autonomous Driving
Autonomous driving has come a long way since it was first promoted in the DARPA Urban Challenge back in 2007. With major car manufacturers lobbying their technologies in autonomous driving, the ownership of autonomous vehicles is set to rise in the future. Current autonomous vehicles rely on a variety of sensors to achieve self-localisation and obstacle avoidance. These can include a combination of laser scanners, radar, GPS, and camera. However, the installation of sensor arrays on a vehicle greatly increases its cost and complexity. At the same time, the increasing affordability and ubiquity of cameras and high-performance graphics processing units (GPUs) are catalysing the resurgence of image processing and computer vision applications. In other words, these applications that were once computationally expensive, are gradually replacing tasks that were performed using other sensors and methods. These tasks include the motion estimation of the vehicle, where precise odometry is crucial for the accurate localisation of the autonomous vehicle. The odometry problem exists such that conventional GPS sensors are unable to provide the necessary road lane precision (\~3 m), and that it is unable to function indoors such as inside tunnels and buildings. Additionally, standard wheel odometry suffers from accumulating drift errors that increase over time. While the use of sensors such as high precision differential GPS and inertial sensors could alleviate this problem, they are significantly more expensive to purchase than a standard camera setup.
The availability of recent works is greatly contributing towards the solution of the visual odometry problem. We have reviewed the various types of visual odometry methods in relation to their applications for autonomous driving. Both monocular and stereoscopic VO are viable approaches for autonomous driving, whereby the hardware will only differ according to its camera setup. The easy attainability of publicly available datasets for autonomous driving, such as the well-known KITTI dataset, is also a major contributing factor that encourages works in VO. By reviewing recent works pertaining to VO that are not more than three years old, we can confirm that the current VO trend is steering towards a low-cost, high-accuracy model that encourages applications on low-powered hardware such as embedded computers. Although the availability of datasets is promoting the proposal of new VO algorithms, we have also observed a current shortage of real-world VO applications, especially in autonomous road vehicles. Many proposed methods stop short of a practical implementation, and only evaluated their algorithms on datasets. We have observed that results from a dataset evaluation often deviate from a complete indication of how the algorithm will perform in a local environment, thereby highlighting the need for practical VO applications. We have deduced from our observations that a couple of factors could contribute to this issue. Firstly, the attainability of test beds for autonomous vehicles is often associated with high costs and complex local legislation, as it usually involves the purchase and retrofitting of an actual road vehicle, especially when considering the higher probability of accidents when new algorithms are tested. On the other hand, while embedded computers are now capable of efficient parallel processing, they are still often unable to provide the necessary computing performance required for visual navigation on the test bed, especially when we compare them against workstation-class GPUs that are typically used to evaluate newly proposed algorithms. Nonetheless, forthcoming high-performance mobile computers and increasing public recognition of autonomous vehicles will undoubtedly encourage practical applications of visual odometry in the near future.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A R EVIEW OF V ISUAL O DOMETRY M ETHODS AND I TS A PPLICATIONS FOR A UTONOMOUS D RIVING* and the introduction:Autonomous driving has come a long way since it was first promoted in the DARPA Urban Challenge back in 2007. With major car manufacturers lobbying their technologies in autonomous driving, the ownership of autonomous vehicles is set to rise in the future. Current autonomous vehicles rely on a variety of sensors to achieve self-localisation and obstacle avoidance. These can include a combination of laser scanners, radar, GPS, and camera. However, the installation of sensor arrays on a vehicle greatly increases its cost and complexity. At the same time, the increasing affordability and ubiquity of cameras and high-performance graphics processing units (GPUs) are catalysing the resurgence of image processing and computer vision applications. In other words, these applications that were once computationally expensive, are gradually replacing tasks that were performed using other sensors and methods. These tasks include the motion estimation of the vehicle, where precise odometry is crucial for the accurate localisation of the autonomous vehicle. The odometry problem exists such that conventional GPS sensors are unable to provide the necessary road lane precision (\~3 m), and that it is unable to function indoors such as inside tunnels and buildings. Additionally, standard wheel odometry suffers from accumulating drift errors that increase over time. While the use of sensors such as high precision differential GPS and inertial sensors could alleviate this problem, they are significantly more expensive to purchase than a standard camera setup."},{"role":"assistant","content":"Conclusion: The availability of recent works is greatly contributing towards the solution for the visual odometry problem. We have reviewed the various types of visual odometry methods in relation to their applications for autonomous driving. Both monocular and stereoscopic VO are viable approaches for autonomous driving, whereby the hardware will only differ according to its camera setup. The easy attainability of publicly available datasets for autonomous driving such as the well-known KITTI dataset is also a major contributing factor that encourages works in VO. By reviewing recent works pertaining to VO that are not more than three years old, we can confirm that the current VO trend is steering towards a low-cost, high accuracy model that encourages applications on low-powered hardware such as embedded computers. Although the availability of datasets is promoting the proposal of new VO algorithms, we have also observed a current shortage of real-world VO applications, especially in autonomous road vehicles. Many proposed methods stop short of a practical implementation, and only evaluated their algorithms on datasets. We have observed that results from a dataset evaluation often deviates from a complete indication of how the algorithm will perform in a local environment, thereby highlighting our necessity for a practical VO application. We have deduced from our observations that a couple of factors could attribute to this issue. 
Firstly, the attainability of test beds for autonomous vehicles is often associated with high costs and complex local legislation, as it usually involves the purchase and retrofitting of an actual road vehicle, especially when considering the higher probability of accidents when new algorithms are tested. On the other hand, while embedded computers are now capable of efficient parallel processing, they are still often unable to provide the necessary computing performance required for visual navigation on the test bed, especially when we compare them against workstation-class GPUs that are typically used to evaluate newly proposed algorithms. Nonetheless, forthcoming high-performance mobile computers and an increasing public recognition towards autonomous vehicles will undoubtedly encourage practical applications of visual odometry in the near future."}]
Analysis of the hands in egocentric vision: A survey
The hands are of primary importance for human beings, as they allow us to interact with objects and environments, communicate with other people, and perform activities of daily living (ADLs) such as eating, bathing, and dressing. It is not a surprise that in individuals with impaired or reduced hand functionality (e.g., after a stroke or cervical spinal cord injury – cSCI) the top recovery priority is to regain the function of the hands [1]. Given their importance, computer vision researchers have tried to analyze the hands from multiple perspectives: localizing them in the images [2], inferring the types of actions they are involved in
In this paper we showed how hand-related information can be retrieved and used in egocentric vision. We summarized the existing literature into three macro-areas, identifying the most prominent approaches for hand localization (e.g., hand detection, segmentation, pose estimation, etc.), interpretation (grasp analysis, gesture recognition, action and activity recognition), as well as the FPV applications for building real-world solutions. We believe that a comprehensive taxonomy and an updated framework of hand-based methods in FPV may serve as guidelines for the novel approaches proposed in this field by helping to identify gaps and standardize terminology.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Analysis of the hands in egocentric vision: A survey* and the introduction:T HE hands are of primary importance for human beings, as they allow us to interact with objects and environments, com- municate with other people, and perform activities of daily living (ADLs) such as eating, bathing, and dressing. It is not a surprise that in individuals with impaired or reduced hand functionality (e.g., after a stroke or cervical spinal cord injury – cSCI) the top recovery priority is to regain the function of the hands [1]. Given their importance, computer vision researchers have tried to analyze the hands from multiple perspectives: localizing them in the images [2], inferring the types of actions they are involved in"},{"role":"assistant","content":"Conclusion: In this paper we showed how hand-related information can be retrieved and used in egocentric vision. We summarized the existing literature into three macro-areas, identifying the most prominent approaches for hand localization (e.g., hand detection, segmentation, pose estimation, etc.), interpretation (grasp analysis, gesture recognition, action and activity recognition), as well as the FPV applications for building real-world solutions. We believe that a comprehensive taxonomy and an updated framework of hand-based methods in FPV may serve as guidelines for the novel approaches proposed in this field by helping to identify gaps and standardize terminology."}]
Architecture and Security of SCADA Systems: A Review
Critical Infrastructures (CI) are often described as the infrastructures which provide essential services and serve as the foundation for any nation’s security, economy, and healthcare systems. Cyber-Physical Systems (CPS) / Internet of Things (IoT) are supplementing traditional CI with data-rich operations. The list of sectors under critical infrastructure varies from country to country. It generally includes sectors like agriculture, healthcare, nuclear reactors, transportation, the energy sector, civil and chemical engineering, water plants, research etc., as depicted in Fig. 1. Supervisory Control and Data Acquisition (SCADA) systems, a class of Industrial Control Systems (ICS), play a pivotal role in managing and controlling the CI. SCADA systems control and monitor geographically distributed assets. Historically, SCADA frameworks were limited to power transmission, gas conveyance, and water appropriation control frameworks. Advancements in technology have led to SCADA being deployed in steel making, chemical processing industries, telecommunications, and experimental and manufacturing facilities [1]. With the Industry 4.0 / Industrial Internet of Things (IIoT) evolution, modern SCADA systems have adopted CPS/IoT, cloud technology, big data analytics, Artificial Intelligence (AI) and machine learning. Integration of these technologies has significantly improved interoperability, eased maintenance and decreased infrastructure cost, leading to a near real-time environment.
SCADA systems have evolved into sophisticated, complex open systems built on advanced technology and connected to the Internet. This exposure has left SCADA systems vulnerable to attackers. Over the years, many attacks on SCADA industrial control systems have been reported, and their impact and severity have varied. The smooth and genuine operation of the SCADA framework is one of the key concerns for enterprises, because the consequences of a breakdown of a SCADA system may range from financial loss to environmental damage to loss of human life.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A RCHITECTURE AND S ECURITY OF SCADA S YSTEMS : A R EVIEW* and the introduction:Critical Infrastructures (CI) are often described as the infrastructures which provide essential services and serves as the foundation for any nation’s security, economy, and healthcare systems. Cyber-Physical Systems (CPS)/ Internet of Things (IoT), are supplementing traditional CI with data-rich operations. The list of sectors under critical infrastructure varies from country to country. It generally includes sectors like agriculture, healthcare, nuclear reactor, transportation, energy sector, civil and chemical engineering, water plants, research etc. as depicted in Fig. 1. Supervisory Control and Data Acquisition (SCADA) systems, an Industrial Control Systems(ICS), have a pivotal role in managing and controlling of the CI. SCADA systems control and monitor geographically distributed assets. Historically, SCADA frameworks were limited to power transmission, gas conveyance, and water appropriation control frameworks. Advancements in technology have led to SCADA being deployed in steel making, chemical processing industries, telecommunications, experimental and manufacturing facilities [ 1 ]. With Industries 4.0 / Industrial Internet of Things (IIoT) evolution, modern SCADA systems have adopted CPS/ IoT, cloud technology, big data analytics, Artificial intelligence (AI) and machine learning. Integration of these technologies has significantly improved interoperability, ease the maintenance and decreased the infrastructure cost. Therefore, leading to a near real-time environment."},{"role":"assistant","content":"Conclusion: SCADA systems have evolved into sophisticated complex open systems based on advanced technology systems connected to the Internet. This medium has lead the SCADA system vulnerable to attackers. Over the period, many attacks on SCADA industrial control system have been reported. The impact and severity of these attacks varied. The smooth and genuine operation of SCADA framework is one of the key concern for the enterprises, because the consequences of break down of SCADA system may range from financial misfortune to natural harm to loss of human life."}]
System Log Parsing: A Survey
With the proliferation of the Internet of Things (IoT), Cloud/Edge computing, Industry 4.0, and Fifth-generation mobile networks (5G), modern computing and communication systems commonly incorporate a large variety of (distributed) components to provide diversified services with guaranteed performance [1]. Consequently, they have become increasingly complex and burdensome to manage. System administrators traditionally resort to runtime analysis such as code instrumentation and profiling for execution monitoring and problem diagnosis, but these techniques are non-trivial to configure and can incur non-negligible overhead in production environments [2]. Alternatively, many research endeavors seek to explore system logs to accomplish the same tasks in a much less intrusive manner. Indeed, since the advent of the BSD Syslog protocol [3] in the 1980s, information and communications technology (ICT) systems have widely employed log files to keep track of the execution states and system events at runtime. Log files usually contain rich runtime information that system administrators and domain experts can leverage to perform advanced analytics and are thus deemed a fundamental building block for the development, maintenance, and troubleshooting of modern systems [4].
This paper aims at providing a comprehensive survey of log parsers. According to their log data classification and template extraction methods, we exhaustively investigate existing solutions and organize them in an easily-accessible taxonomy. Then we systematically analyze their performance metrics (accuracy and parsing time) and operational features (parsing mode, parameter tuning, and preprocessing), extracting a consistent set of benchmark results on the most prevalent open-source solutions. We believe this survey provides a reasoned first-hand account on the whole solution space of log parsing. This work can also help practitioners select the most appropriate open-source solutions for their needs. We also envision the future directions to promote the continued development of novel accurate and scalable parsing techniques that may befit the evolution of log analysis requirements in modern ICT systems.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*System Log Parsing: A Survey* and the introduction:W ITH the proliferation of the Internet of Things (IoT), Cloud/Edge computing, Industry 4.0, and Fifth- generation mobile networks (5G), modern computing and communication systems commonly incorporate a large va- riety of (distributed) components to provide diversified services with guaranteed performance [1]. Consequently, they have become increasingly complex and burdensome to manage. System administrators traditionally resort to runtime analysis such as code instrumentation and profiling for execution monitoring and problem diagnosis, but these techniques are non-trivial to configure and can incur non- negligible overhead in production environment [2]. Alter- natively, many research endeavors seek to explore system logs to accomplish the same tasks in a much less intru- sive manner. Indeed, since the advent of the BSD Syslog protocol [3] in the 1980s, information and communications technology (ICT) systems have widely employed log files to keep track of the execution states and system events at runtime. Log files usually contain rich runtime information that system administrators and domain experts can leverage to perform advanced analytics and are thus deemed a fun- damental building block for the development, maintenance, and troubleshooting of the modern systems [4]."},{"role":"assistant","content":"Conclusion: This paper aims at providing a comprehensive survey of log parsers. According to their log data classification and template extraction methods, we exhaustively investigate existing solutions and organize them in an easily-accessible taxonomy. Then we systematically analyze their perfor- mance metrics (accuracy and parsing time) and operational features (parsing mode, parameter tuning, and preprocess- ing), extracting a consistent set of benchmark results on the most prevalent open-source solutions. We believe this survey provides a reasoned first-hand account on the whole solution space of log parsing. This work can also help prac- titioners select the most appropriate open-source solutions for their needs. We also envision the future directions to promote the continued development of novel accurate and scalable parsing techniques that may befit the evolution of log analysis requirements in modern ICT systems."}]
A Survey on Natural Language Processing for Fake News Detection
Automatic fake news detection is the task of assessing the truthfulness of claims in news. This is a new, but critical NLP problem because both traditional news media and social media have huge social-political impacts on every individual in the society. For example, exposure to fake news can cause attitudes of inefficacy, alienation, and cynicism toward certain political candidates (Balmas, 2014). The worst part of the spread of fake news is that sometimes it does link to offline violent events that threaten the public safety (e.g., the PizzaGate (Kang and Goldman, 2016)). Detecting fake news is of crucial importance to the NLP community, as it also creates broader impacts on how technologies can facilitate the verification of the veracity of claims while educating the general public.
We described and compared previous datasets and proposed new requirements for future datasets: 1. they should be easy to build from raw data on the internet; 2. they should provide enough classes of truthfulness; 3. they should quote claims or articles from different speakers or publishers. Besides, we compared the accuracy of many previous experiments and identified several challenging tasks for our future fake news detection model: 1. more textual content-based methods for multi-class fake news detection based on Natural Language Processing should be developed to achieve reliable detection; 2. we need a more logical explanation of fake news characteristics; 3. language-based fake news detection is limited when there are not enough linguistic differences to raise detection accuracy to a very high rate, so we should extend verification with evidence as a content-based method; 4. note that hand-crafted feature extraction will likely be replaced as neural network models improve.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Natural Language Processing for Fake News Detection* and the introduction:Automatic fake news detection is the task of as- sessing the truthfulness of claims in news. This is a new, but critical NLP problem because both tra- ditional news media and social media have huge social-political impacts on every individual in the society. For example, exposure to fake news can cause attitudes of inefficacy, alienation, and cyni- cism toward certain political candidates ( Balmas , 2014 ). The worst part of the spread of fake news is that sometimes it does link to offline violent events that threaten the public safety (e.g., the PizzaGate ( Kang and Goldman , 2016 )). Detect- ing fake news is of crucial importance to the NLP community, as it also creates broader impacts on how technologies can facilitate the verification of the veracity of claims while educating the general public."},{"role":"assistant","content":"Conclusion: We described and compared previous datasets and proposed new requirements for future datasets; 1.Easy to make from raw data in internets, 2.Have enough classes of truthfulness, 3.Quote claims or articles from different speakers or publishers. Be- sides, We compared the accuracy of many previ- ous experiments and made some challenging task to our future fake news detection model; 1. More textual content-based method on multi-class fake news detection based on Natural Language Pro- cessing should be developed for realizing reliable detection. 2. We need a more logical explana- tion for fake news characteristics 3. There should be a limitation in language-based fake news de- tection in the case that there are not enough lin- guistic differences to improve detection accuracy to very high rate so that we should extend the way of verification with evidence as the content-based method. 4. Note that hand-crafted features ex- traction will be replaced by neural network models improvement."}]
Recent Neural Methods on Slot Filling and Intent Classification for Task-Oriented Dialogue Systems: A Survey
The ability to understand a user’s requests is essential to develop effective task-oriented dialogue systems. For example, in the utterance “I want to listen to Hey Jude by The Beatles”, a dialogue system should correctly identify that the user’s intention is to give a command to play a song, and that Hey Jude and The Beatles are, respectively, the song’s title and the artist name that the user would like to listen to. In a dialogue system this information is typically represented through a semantic-frame structure (Tur and De Mori, 2011), and extracting such a representation involves two tasks: identifying the correct frame (i.e. intent classification (IC)) and filling in the correct values for the slots of the frame (i.e. slot filling (SF)).
We have surveyed recent neural-based models applied to SF and IC in the context of task-oriented dialogue systems. We examined three approaches, i.e. independent, joint, and transfer learning based models. Joint models that exploit the relation between SF and IC simultaneously have shown relatively better performance than independent models. Empirical results have shown that most joint models nearly “solve” the widely used datasets, ATIS and SNIPS, given sufficient in-domain training data. Nevertheless, there are still several challenges related to SF and IC, especially improving the scalability of the model to new domains and languages when limited labeled data are available.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Recent Neural Methods on Slot Filling and Intent Classification for Task-Oriented Dialogue Systems: A Survey* and the introduction:The ability to understand user’s requests is essential to develop effective task-oriented dialogue systems. For example, in the utterance ” I want to listen to Hey Jude by The Beatles ”, a dialogue system should correctly identify that the user’s intention is to give a command to play a song, and that Hey Jude and The Beatles are, respectively, the song’s title and the artist name that the user would like to listen. In a dialogue system this information is typically represented through a semantic-frame structure ( Tur and De Mori, 2011 ), and extracting such representation involves two tasks: identifying the correct frame (i.e. intent classification (IC) ) and filling the correct value for the slots of the frame (i.e. slot filling (SF) )."},{"role":"assistant","content":"Conclusion: We have surveyed recent neural-based models applied to SF and IC in the context of task-oriented di- alogue systems. We examined three approaches, i.e. independent , joint , and transfer learning based models. Joint models exploiting the relation between SF and IC simultaneously shown relatively bet- ter performance than independent models. Empirical results have shown that most joint models nearly ”solve” widely used datasets, ATIS and SNIPS, given sufficient in-domain training data . Nevertheless, there are still several challenges related to SF and IC, especially improving the scalability of the model to new domains and languages when limited labeled data are available."}]
An introduction to optimization under uncertainty – A short survey
Optimization is a vast field and is arguably one of the most useful tools for scientists and engineers. With applications in almost any industry, from operations research to climate analysis to process control to robotics, the need to further our understanding of optimization and develop efficient algorithms to solve optimization problems is clear. The mathematical structure and geometric interpretations of optimization make it an exciting academic research area. It is interesting for its own sake. So it is fortunate that optimization also happens to be extremely useful in solving real problems and developing real technology. Another fortunate feature of optimization is that it has a rich history of remarkable leaps in understanding. One discovery of particular importance was the realization that the distinction between complex and easy optimization problems does not hinge on linearity, but rather, convexity [6]. Rockafellar published this historical paper in 1993. The date of his seminal discovery is interesting to note when put into context. Humans first stepped on the moon in 1969. So it wasn’t until over 20 years later that we realized the fundamental importance of convexity in optimization problems. Nowadays, we are consistently sending rockets to space and back, which would not be possible without numerical optimization, in particular, convex optimization [25]. How many more discoveries of the same magnitude as Rockafellar’s are left to make? Currently, it seems that the theory behind convex optimization is nearly complete. So what developments are necessary to further our understanding of optimization and increase its utility? Optimization that includes uncertainty is a research frontier that is ripe for research.
The field of optimization has made tremendous strides in progress over the last half-century. Theoretical developments have improved our understanding of the structure of optimization problems. Advancements in computer technology continue to make optimization more applicable to everyday engineering. With the inclusion of uncertainty into the optimization problems, the resulting solutions are more robust to the inevitable discrepancies between the modeled parameters and the real system. However, simply assigning probability distributions to unknown parameters often assumes more knowledge than what one really has. By accepting the lack of knowledge about certain parameters, the problem then becomes one of epistemic uncertainty. Using epistemic models often results in more realistic models, however, it complicates the problem significantly. Incorporating epistemic uncertainty can immediately make a tractable optimization problem intractable. The most popular model for epistemic uncertainty is the bounded set. It is apparent that the theory of optimization under epistemic uncertainty with more advanced models than the bounded set needs to be further developed. There is a disproportionate amount of literature around advanced epistemic uncertainty modeling and the associated optimization problems. A unified mathematical framework for optimizing under epistemic uncertainty beyond the bounded set is of paramount importance and is a research direction with many opportunities. Furthermore, the need to develop efficient algorithms to solve optimization under epistemic uncertainty will naturally follow once the theory is in place.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*An introduction to optimization under uncertainty – A short survey* and the introduction:Optimization is a vast field and is arguably one of the most useful tools for scientists and engineers. With applications in almost any industry, from operations research to climate analysis to process control to robotics, the need to further our understanding of optimization and develop efficient algorithms to solve optimization problems is clear. The mathematical structure and geometric interpretations of optimization make it an exciting academic research area. It is interesting for its own sake. So it is fortunate that optimization also happens to be extremely useful in solving real problems and developing real technology. Another fortunate feature of optimization is that it has a rich history of remarkable leaps in understanding. One discovery of particular importance was the realization that the distinction between complex and easy optimization problems does not hinge on linearity, but rather, convexity [6]. Rockafellar published this historical paper in 1993. The date of his seminal discovery is interesting to note when put into context. Humans first stepped on the moon in 1969. So it wasn’t until over 20 years later that we realized the fundamental importance of convexity in optimization problems. Nowadays, we are consistently sending rockets to space and back, which would not be possible without numerical optimization, in particular, convex optimization [25]. How many more discoveries of the same magnitude as Rockafellar’s are left to make? Currently, it seems that the theory behind convex optimization is nearly complete. So what developments are necessary to further our understanding of optimization and increase its utility? Optimization that includes uncertainty is a research frontier that is ripe for research."},{"role":"assistant","content":"Conclusion: The field of optimization has made tremendous strides in progress over the last half-century. Theoretical de- velopments have improved our understanding of the structure of optimization problems. Advancements in computer technology continue to make optimization more applicable to everyday engineering. With the inclu- sion of uncertainty into the optimization problems, the resulting solutions are more robust to the inevitable discrepancies between the modeled parameters and the real system. However, simply assigning probability distributions to unknown parameters often assumes more knowledge than what one really has. By accepting the lack of knowledge about certain parameters, the problem then becomes one of epistemic uncertainty. Using epistemic models often results in more realistic models, however, it complicates the problem significantly. Incor- porating epistemic uncertainty can immediately make a tractable optimization problem intractable. The most popular model for epistemic uncertainty is the bounded set. It is apparent that the theory of optimization under epistemic uncertainty with more advanced models than the bounded set needs to be further developed. There is a disproportionate amount of literature around advanced epistemic uncertainty modeling and the associated optimization problems. A unified mathematical framework for optimizing under epistemic uncertainty beyond the bounded set is of paramount importance and is a research direction with many opportunities. 
Furthermore, the need to develop efficient algorithms to solve optimization under epistemic uncertainty will naturally follow once the theory is in place."}]
Graph-based Knowledge Distillation: A survey and experimental evaluation
Graph data [ 1 ], which represents the relationship between objects, is an important data type used in various real-world scenarios such as user recommendation [ 2 ], drug discovery [ 3 ], traffic forecasting [ 4 ], point cloud classification [ 5 ], and chip design [ 6 ]. Unlike structured data in Euclidean space, graph data has a complex structure and contains rich information. To learn vectorized representations with sufficient information from complex graphs, researchers are applying deep learning methods to graphs. Drawing on the idea of convolutional neural networks (CNNs) [ 7 ], graph neural networks (GNNs) [ 8 ] have been proposed, and they have been effectively applied in tasks such as node classification [ 9 ], link prediction [ 10 ], and graph classification [ 11 ].
Starting from the basic concepts of graph data and knowledge distillation, this paper thoroughly reviews Graph-based Knowledge Distillation methods. Firstly, based on their design characteristics, Graph-based Knowledge Distillation algorithms are divided into three categories: Graph-based Knowledge Distillation for deep neural networks (DKD), Graph-based Knowledge Distillation for graph neural networks (GKD), and Self-Knowledge Distillation based Graph-based Knowledge Distillation (SKD). Secondly, they are further subdivided into output-layer, middle-layer, and constructed-graph methods according to the position at which knowledge is distilled. Then, the performance of mainstream Graph-based Knowledge Distillation methods is compared experimentally. In addition, we summarize the critical application scenarios of Graph-based Knowledge Distillation in various fields. Finally, we conclude and discuss promising research directions for Graph-based Knowledge Distillation. It is hoped that this paper can provide insights for researchers in graph representation learning and knowledge distillation and promote the sustainable development of this field.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Graph-based Knowledge Distillation: A survey and experimental evaluation* and the introduction:G RAPH data [ 1 ], which represents the relationship be- tween objects, is an important data type used in vari- ous real-world scenarios such as user recommendation [ 2 ], drug discovery [ 3 ], traffic forecasting [ 4 ], point cloud clas- sification [ 5 ], and chip design [ 6 ]. Unlike structured data in Euclidean space, graph data has a complex structure and contains rich information. To learn vectorized repre- sentations with sufficient information from complex graphs, researchers are applying deep learning methods to graphs. Drawing on the idea of convolutional neural networks (CNNs) [ 7 ], graph neural networks (GNNs) [ 8 ] have been proposed, and they have been effectively applied in tasks such as node classification [ 9 ], link prediction [ 10 ], and graph classification [ 11 ]."},{"role":"assistant","content":"Conclusion: Based on the basic concept of graph data and knowledge distillation, the method of Graph-based Knowledge Distilla- tion is thoroughly combed in this paper. Firstly, on the basis of the design characteristics of the Graph-based Knowledge Distillation algorithm, it can be divided into three cate- gories: Graph-based Knowledge Distillation for deep neural networks (DKD), Graph-based Knowledge Distillation for graph neural networks (GKD), and Self-Knowledge Dis- tillation based Graph-based Knowledge Distillation (SKD). Secondly, it can be further subdivided into the output layer, middle layer, and constructed graph method based on the position of knowledge distillation. Then, the algorithm performance of the mainstream Graph-based Knowledge Distillation method is compared experimentally. In addition, we summarize the critical application scenarios of Graph- based Knowledge Distillation in various fields. Finally, we conclude and prospect the research direction of Graph- based Knowledge Distillation learning recently. It is hoped that this paper can provide some insights for the researchers of graph representation learning and knowledge distillation to promote the sustainable development of this field."}]
Understand and Improve Contrastive Learning Methods for Visual Representation: A Review
Traditional supervised learning methods are hitting a bottleneck [ 33 ] because of their dependency on expensive manually labeled data and their weaknesses such as limited generalization ability and vulnerability to adversarial attacks [ 17 , 51 , 33 ]. A promising alternative, self-supervised learning, as a type of unsupervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling. Among self-supervised learning algorithms, contrastive learning has achieved state-of-the-art performance in several fields of research, including but not limited to computer vision [ 7 , 20 , 39 , 53 ], natural language processing [ 29 , 30 ], and biomedical image processing [6, 3, 12, 1].
This review analyzed the key components of contrastive learning methods. First, we explained that pretext invariant learning methods [ 35 ] outperform pretext covariant learning methods because the former preserve the shared visual semantics under image transformation. After that, building on pretext invariant learning, contrastive learning methods such as SimCLR [ 7 ] used a contrastive loss to achieve supervised-level representation learning performance, with a dependency on 1) large numbers of negative examples for contrasting and 2) a specific composition of data augmentation operations. We then discussed works that tackle these two dependencies. For negative examples, we introduced MoCHi [ 26 ] as a possible method to understand and improve the selection of negative examples. For the specific composition of data augmentations, we discussed the criticism of it [44] and introduced two possible solutions: a general representation evaluation method [ 46 ] and a covariant-preserving pretext invariant learning method [ 49 ]. With all these components discussed, we hope this literature review can help readers understand contrastive learning and develop novel methods to improve it.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Understand and Improve Contrastive Learning Methods for Visual Representation: A Review* and the introduction:Traditional supervised learning methods are hitting a bottleneck [ 33 ] because of their dependency on expensive manually labeled data and their weaknesses such as limited generalization ability and vulnerability to adversarial attacks [ 17 , 51 , 33 ]. A promising alternative, self-supervised learning, as a type of unsupervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling. Among self-supervised learning algorithms, contrastive learning has achieved state-of-the-art performance in several fields of research, including but not limited to computer vision [ 7 , 20 , 39 , 53 ], natural language processing [ 29 , 30 ], and biomedical image processing [6, 3, 12, 1]."},{"role":"assistant","content":"Conclusion: This review analyzed the key components in contrastive learning methods. First, we introduce that pretext invariant learning methods [ 35 ] outperform pretext covariant learning methods because the former preserves the mutual visual semantics under image transformation. After that, based on pretext invariant learning, contrastive learning methods e.g. SimCLR [ 7 ] used contrastive loss to achieve supervised-level representation learning performance, with a dependency on 1) large numbers of negative examples for contrasting; 2) a specific composition of data augmentation operations. In the following, we discuss works that tackle these two dependencies. For negative examples, we introduced MoCHi [ 26 ] as a possible method to understand and improve the selection of negative examples. For the specific composition of data augmentations, we discussed people’s criticism [44] about it and introduced two possible solutions: a general representation evaluation method [ 46 ] and a covariant preserved pretext invariant learning method [ 49 ]. With all these components being discussed, we hope this literature review could help people understand the contrastive learning and develop novel more methods to improve contrastive learning."}]
Towards Automatic Threat Detection: A Survey of Advances of Deep Learning within X-ray Security Imaging
X-ray security screening is one of the most widely used security measures for maintaining airport and transport security, whereby manual screening by human operators plays a vital role. Despite the fact that experience and knowledge are the key factors for confident detection, external variables such as emotional exhaustion and job satisfaction adversely impact manual screening [ 1 – 5 ].
This paper taxonomises the conventional machine learning and modern deep learning algorithms utilised within X-ray security imaging. Traditional approaches are sub-categorised based on computer vision tasks such as image enhancement, threat image projection, object segmentation, feature extraction, object classification, and detection. The review of deep learning approaches covers classification, detection, segmentation, and unsupervised anomaly detection algorithms applied within the field. The discussion finally presents the strengths and weaknesses of the current techniques, outlines open challenges, and envisions future directions for the field.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Towards Automatic Threat Detection: A Survey of Advances of Deep Learning within X-ray Security Imaging* and the introduction:X-ray security screening is one of the most widely used security measures for main- taining airport and transport security, whereby manual screening by human oper- ators plays the vital role. Despite the fact that experience and knowledge are the key factors for confident detection, external variables such as emotional exhaustion and job satisfaction adversely impact the manual screening [ 1 – 5 ]."},{"role":"assistant","content":"Conclusion: This paper taxonomises conventional machine and modern deep algorithms utilised within X-ray security imaging. Traditional approaches are sub-categorised based on computer vision tasks such as image enhancement, threat image projection, ob- ject segmentation, feature extraction, object classification, and detection. Review of the deep learning approaches includes classification detection, segmentation and unsupervised anomaly detection algorithms applied within the field. The discus- sion finally provides the strengths and weaknesses of the current techniques, open challenges and envisions the future directions of the field."}]
Unsupervised Representation Learning in Deep Reinforcement Learning: A Review
Dynamic systems pervade our daily life. A pendulum swinging, a robot picking up objects, the flow of water in a river, the trend of economic costs, the spread of disease and the prediction of climate change are all examples of dynamic systems. Understanding how these systems evolve by discovering their governing equations, how they react to inputs, and how they can be controlled is a crucial aspect of science and engineering [1].
This paper reviewed the most important and most recent research trends in unsupervised state representation learning (SRL) in deep reinforcement learning (DRL). From the literature, we identified four major classes of SRL methods: observation reconstruction, learning of latent MDP models, the use of priors, and contrastive learning. Moreover, we studied how these methods are combined to develop more complex approaches. Finally, we extended the review to include advanced methods and applications of SRL.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*U NSUPERVISED R EPRESENTATION L EARNING IN D EEP R EINFORCEMENT L EARNING : A R EVIEW* and the introduction:Dynamic systems pervade our daily life. A pendulum swinging, a robot picking up objects, the flow of water in a river, the trend of economic costs, the spread of disease and the prediction of climate change are all examples of dynamic systems. Understanding how these systems evolve by discovering the governing equations, how they react to inputs and how they can be controlled are crucial aspects of science, and engineering [1]."},{"role":"assistant","content":"Conclusion: This paper reviewed the most important and newest research trends on unsupervised SRL in DRL. From the literature, we identified four major classes of SRL methods: observation reconstruction, learning of latent MDP models, using priors, and contrastive learning. Moreover, we study how these methods are combined to develop more complex approaches. Eventually, we extended the review to include the advanced methods and applications of SRL."}]
A Survey on Influence Maximization: From an ML-Based Combinatorial Optimization
With the rapid development of the mobile internet, social media platforms and content sharing platforms are rapidly emerging. Compared with newspapers, radio, television, and other traditional media, mobile Internet-based media has faster speed, lower cost, and more diverse forms. Thus, the commercial social giants represented by Weibo, Facebook, TikTok, and Twitter were quickly born and strengthened in the last two decades. All these applications are based on the Online Social Network (OSN), which is a structure composed of individuals and social interactions between them. It acts as a medium for spreading information, ideas or opinions among users through the effect of “word-of-mouth”. For example, Facebook has over 1.52 billion daily active users and Twitter has 321 million monthly active users. The phenomenon of information diffusion in social networks has attracted the attention of scholars, businessmen and politicians because of its huge value in many aspects. These values can be used in many specific applications, such as viral marketing [ 52 ] [ 155 ], network monitoring [ 105 ], rumor blocking [ 33 ] [ 67 ], and social recommendation [ 209 ]. As a result, they jointly promote research on the Influence Maximization (IM) problem.
In this paper, we present a comprehensive survey on the IM problem from an ML-based combinatorial optimization perspective. In order to compare ML-based algorithms with traditional approaches, we first summarize three kinds of traditional approaches: simulation-based, proxy-based, and sampling-based approaches, and point out their respective defects. Then, we survey two kinds of commonly used frameworks for solving combinatorial optimization problems: the Ptr-Nets-based RL framework and the GNNs-based RL framework. We discuss their basic structures and characteristics in detail, and focus on summarizing the principles and techniques related to the IM problem. This paves the way for solving the IM problem with ML-based methods. Furthermore, we cover almost all ML-based IM algorithms to date, focusing on several pioneering DRL solution frameworks, and summarize the variant IM problems and their corresponding ML-based algorithms. Here, we explain in detail their working principles and how they break through the bottlenecks of traditional algorithms. Finally, we summarize the existing difficulties and future research directions in this field from two aspects.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Influence Maximization: From an ML-Based Combinatorial Optimization* and the introduction:With the rapid development of the mobile internet, social media platforms and content sharing platforms are rapidly emerging. Compared with newspapers, radio, television, and other traditional media, mobile Internet-based media has faster speed, lower cost, and more diverse forms. Thus, the commercial social giants represented by Weibo, Facebook, TikTok, and Twitter were quickly born and strengthened in the last two decades. All these applications are based on the Online Social Network (OSN), which is a structure composed of individuals and social interactions between them. It acts as a medium for spreading information, ideas or opinions among users through the effect of “word-of-mouth”. For example, Facebook has over 1.52 billion daily active users and Twitter has 321 million monthly active users. The phenomenon of information diffusion in social networks has attracted the attention of scholars, businessmen and politicians because of its huge value in many aspects. These values can be used in many specific applications, such as virus marketing [ 52 ] [ 155 ], network monitoring [ 105 ], rumor blocking [ 33 ] [ 67 ], and social recommendation [ 209 ]. As a result, they jointly promote research on the Influence Maximization (IM) problem."},{"role":"assistant","content":"Conclusion: In this paper, we present a comprehensive survey on the IM problem from an ML-based com- binatorial optimization perspective. In order to compare ML-based algorithms with traditional approaches, we first summarize three kinds of traditional approaches: simulation-based, proxy- based, and sampling-based approaches, and point out their respective defects. Then, we survey two kinds of commonly used framework that can be used to solve combinatorial optimization problems: Ptr-Nets Based RL framework and GNNs Based RL Framework. We discuss their basic structures and characteristics in detail, and focus on summarizing the principles and technologies related to the IM problem. This paves the way for solving the IM problem by using ML-based methods. Furthermore, we cover almost all the ML-based IM algorithms up to now, focusing on several pioneering DRL solution frameworks, and summarizing the variant IM problems and their corresponding ML-based algorithms. Here, we elaborately explain their working principle and how to break through the bottleneck of traditional algorithms. Finally, we summarize the existing difficulties and future research directions in this field from two aspects."}]
Emotion Detection in Text: a Review
Emotion detection in computational linguistics is the process of identifying discrete emotion expressed in text. Emotion analysis can be viewed as a natural evolution of sentiment analysis and its more fine-grained model. However, as we observe in this paper, this field still has a long way to go before matching the success and ubiquity of sentiment analysis.
In this paper, we reviewed the current state of emotion detection in textual data based on the available work in the literature. While many successful methodologies and resources were introduced for sentiment analysis in recent years, researchers, recognizing the importance of more fine-grained affective information in decision making, turned to emotion detection in order to distinguish between different negative or positive emotions. In addition, the large amount of textual data that has accompanied the rise of social media over the past couple of decades, and therefore the availability of vast self-expression text about any major or minor event, idea, or product, points to a great potential to change how entities and organizations use this information as a basis for their future decision-making processes.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Emotion Detection in Text: a Review* and the introduction:Emotion detection in computational linguistics is the process of identifying discrete emotion expressed in text. Emotion analysis can be viewed as a natural evolution of sentiment analysis and its more fine- grained model. However, as we observe in this paper, this field still has a long way to go before matching the success and ubiquity of sentiment analysis."},{"role":"assistant","content":"Conclusion: In this paper, we reviewed the current state of emotion detection in textual data based on the available work in the literature. While many successful methodology and resources was introduces for sentiment analysis in recent years, researchers, by understanding the importance of more fine-grained affective information in decision making, turned to emotion detection in order to distinguish between different negative or positive emotions. In addition, having large amount of textual data with the rise of social media in past couple of decades, and therefore the availability of vast self expression text about any major or minor event, idea, or product, points to a great potential to change how entities and organizations can use these information as a basis for their future decision making processes."}]
A Review of Intrusion Detection Systems and Their Evaluation in the IoT
An IDS is in essence a monitor placed on a device and/or network that analyses incoming messages to detect attacks and/or unwanted traffic, and when paired with an intrusion prevention system can be used to stop attacks before they affect the system. They are trained using system behaviour data or existing attack databases, and use these patterns to make the detection. They are widely deployed in a variety of systems and can often be considered as a first line of defence against intruders. Deployment strategies differ; however, they can broadly be classified into Host Based (HIDS) and Network Based (NIDS). A HIDS monitors activities on the device itself such as system calls or shell commands to discover unauthorised behaviours or accesses. These are normally very fine-grained traces of behaviour on a single device. A NIDS on the other hand looks at the network data to determine the likelihood of intrusion. This approach is more flexible when looking at large systems of devices and will have a smaller overhead on the devices themselves; however, it has less granularity. To optimise detection it is sometimes desirable to mix the approaches and have both implemented on a system. This technique is often referred to as collaborative intrusion detection (CID). Whilst this approach theoretically provides wider coverage, it also needs a very structured architecture, and is therefore much more complex to implement.
In this work we have extensively analysed and summarised the current state of the art in intrusion detection in the IoT. We have reviewed, summarised, and characterised 51 proposed tools and technologies. Our analysis has shown several new insights into how research in this field is developing, and potential gaps for future works. We have identified a major gap in the reproducibility of tool evaluations and also discussed shortcomings in the current evaluation process, namely 1) a single focus on detection accuracy; 2) the lack of a unified evaluation methodology; and 3) a disregard for evaluating the usability of these tools. Our findings indicate that the very large number of papers in this area is perhaps in part due to the inability to build upon the software and the lack of research into tools that can be monitored and supervised. We see a need to shift the focus away from the development of new techniques towards making tools that can scale and be usable across IoT deployments. The analysis shows that this is in part due to how the tools have been evaluated, and we suggest the metrics from Etalle (2019) [ 30 ] as a better set against which to decide the suitability of an IDS for IoT systems. As a final consideration, we see a thorough comparative evaluation of these tools as a beneficial and necessary task, although we reserve this for future work as there is currently not enough access to the proposed tools.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review of Intrusion Detection Systems and Their Evaluation in the IoT* and the introduction:An IDS is in essence a monitor placed on a device and/or network that analyses incoming messages, to detect attacks and/or unwanted traffic, and when paired with an intrusion prevention system can be used to stop attacks before they affect the system. They are trained using system behaviour data or from existing attack databases, and use these patterns to make the detection. They are widely deployed in a variety of systems and can often be considered as a first line of defence against intruders. Deployment strategies differ; however, they can broadly be classified into Host Based (HIDS) and Network Based (NIDS). A HIDS monitors activities on the device itself such as system calls or shell commands to discover unauthorised behaviours or accesses. They are normally very fine grained traces of behaviour on a single device. A NIDS on the other hand looks at the network data to determine the likelihood of intrusion. This approach is more flexible when looking at large systems of devices and will have a smaller overhead on the devices themselves; however, it has less granularity. To optimise detection it is sometime desirable to mix the approaches and have both implemented on a system. This technique is often referred to as collaborative intrusion detection (CID). Whilst this approach theoretically provides a wider coverage it also needs a very structured architecture, and is therefore much more complex to implement."},{"role":"assistant","content":"Conclusion: In this work we have extensively analysed and summarised the current state of the art in intrusion detection in the IoT. We have reviewed and summaries 51 proposed tools and technologies and characterised them. Our analysis has shown several new insights into how research in this field is developing, and potential gaps for future works. We have identified a major gap for the reproducibility of the tools evaluation and also discussed shortcomings in the current evaluation process, namely 1) single focus on detection accuracy; 2) lack of unified evaluation methodology; and 3) disregard for evaluation regarding usability of these tools. Our findings indicate that perhaps the very large numbers of papers in this area is in part due to the inability to build upon the software and the lack of research into tools that can be monitored and supervisable. We see a need for a shift to focus away from development of new techniques to a focus on making these tools that could scale and be usable across IoT deployments. The analysis shows that this is in part due to how the tools have been evaluated and we suggest the usage of metrics from Etalle, S (2019) [ 30 ] as a better set of metrics against which to decide the suitability of an IDS for IoT systems. As a final consideration we see a thorough comparative evaluation by these tools through this evaluation a beneficial and necessary task, although we reserve this for future work as there currently is not enough access to the proposed tools."}]