EditSprings: 10 Special Issue Announcements from Highly Cited SCI Journals in Artificial Intelligence

Paper Editing | 2019/06/19 13:56:59 | 410 views

Artificial Intelligence | Applied Soft Computing Special Issue on Emerging Soft Computing Methodologies in Deep Learning and Applications

Full-paper deadline: 2019-09-30. Impact factor: • Major category: Engineering & Technology - Zone 2 • Subcategory: Computer Science: Artificial Intelligence - Zone 2 • Subcategory: Computer Science: Interdisciplinary Applications - Zone 2. URL:

Machine learning is the design and analysis of algorithms that allow computers to "learn" automatically, enabling machines to derive rules from data and use those rules to make predictions on unseen data. The traditional machine learning workflow, which begins with problem definition and data collection and ends with model development and results verification, struggles to meet the needs of the Internet of Things (IoT). However, this picture has changed dramatically with the development of artificial intelligence (AI) and high-speed computing. Deep learning is a prime example: by automating feature extraction it breaks the limits of classical machine learning, delivers astonishingly superior performance, and makes a number of extremely complex applications possible.

Machine learning has been applied to solve complex problems in human society for years, and its success rests on advances in computing capability as well as sensing technology. The ongoing evolution of artificial intelligence and soft computing approaches will soon have a considerable impact on the field. Search engines, image recognition, biometrics, speech and handwriting recognition, natural language processing, and even medical diagnostics and financial credit ratings are all common examples. Clearly, many challenges will confront the public as artificial intelligence infiltrates our world and, more specifically, our lives.

Deep learning has become relatively mature in the field of supervised learning, but other areas of machine learning are only getting started, especially unsupervised learning and reinforcement learning with soft computing methodologies. Deep learning is a class of machine learning algorithms that:

use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.

learn in supervised and/or unsupervised manners.

learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.

Due to its cascaded structure and multiple levels of abstraction, deep learning performs very well in speech recognition and image recognition, especially when one aims to obtain representations at different levels of resolution in signals and images, with features extracted automatically along the way. Two common models, the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN), are widely used architectures in deep learning. Whereas most "deep learning" technologies build on supervised learning to construct classifiers that recognize inputs entering an information system, "soft computing and metaheuristic algorithms" build on unsupervised learning to find good solutions in a solution space that can be regarded as infinite. The algorithms of these two research domains are two promising AI technologies that have been widely and successfully used to solve many complex, large-scale problems.
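To make the cascade-of-layers definition above concrete, here is a minimal sketch in Python/NumPy; the layer widths and the ReLU nonlinearity are illustrative choices, not anything prescribed by the call:

```python
import numpy as np

def relu(x):
    """Nonlinear processing unit, applied elementwise."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A cascade of three layers: each successive layer consumes the
# previous layer's output, forming a hierarchy of representations.
layer_shapes = [(16, 32), (32, 64), (64, 10)]  # illustrative widths
weights = [rng.normal(0, 0.1, size=s) for s in layer_shapes]

def forward(x, weights):
    h = x
    for W in weights:
        h = relu(h @ W)   # feature extraction and transformation
    return h

x = rng.normal(size=(1, 16))       # a toy input signal
print(forward(x, weights).shape)   # -> (1, 10)
```

Stacking such transformations is what yields the hierarchy of increasingly abstract representations described above.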

However, applying deep learning brings its own challenges. To perform well, deep learning algorithms require large and diverse data, and a large number of parameters must be tuned. Furthermore, well-trained deep learning models tend to overfit and do not transfer easily to other domains. In addition, the training process remains a black box: researchers have a hard time understanding how such models learn and how they reach their conclusions. Therefore, to boost the performance and transparency of deep learning models and to bring them to a level of genuine practical use in real-world applications and facilities, this special issue places special attention on: (i) reducing the complexity and number of parameters of deep learning models with soft computing methodologies; (ii) enhanced interpretation and reasoning methods, based on soft computing, for explaining the hidden components of deep learning models and for better understanding their outputs, thereby increasing acceptability for company experts and users; and (iii) methods for incrementally self-adapting and evolving soft computing methodologies for deep learning models, where not only weight parameters are recursively updated but internal structures are also evolved and pruned on the fly, based on current changes and the drift intensity present in the system. Furthermore, new deep learning methods are warmly welcomed, whether in combination with renowned, widely used architectures or developed for soft computing and artificial intelligence settings where deep learning has barely been considered so far (for example, deep learning SVMs or deep learning bio-inspired systems hardly exist). Also of interest are new and emerging applications, and new deep learning treatments of established applications of soft computing methodologies and architectures, with specific emphasis on big data, the Internet of Things, social media data mining, and web applications.
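Point (i) above pairs naturally with a code illustration: soft computing methods such as evolutionary algorithms can tune deep-model hyperparameters without gradients. A minimal sketch, assuming a hypothetical validation_error objective (any real train-and-evaluate routine would take its place):

```python
import random

def validation_error(lr, width):
    """Hypothetical stand-in for training a model with the given
    hyperparameters and measuring validation error."""
    return (lr - 0.01) ** 2 + (width - 64) ** 2 / 1e4

# Simple (mu + lambda)-style evolutionary search over two hyperparameters.
population = [(random.uniform(1e-4, 0.1), random.randint(8, 256))
              for _ in range(10)]
for generation in range(20):
    scored = sorted(population, key=lambda p: validation_error(*p))
    parents = scored[:3]                                # keep the best
    population = parents + [
        (max(1e-5, p[0] * random.gauss(1, 0.2)),        # mutate learning rate
         max(4, int(p[1] * random.gauss(1, 0.2))))      # mutate layer width
        for p in random.choices(parents, k=7)
    ]
best = min(population, key=lambda p: validation_error(*p))
print("best lr=%.4f width=%d" % best)
```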

Original contributions are solicited on, but are not limited to, the following topics of interest:

Methodologies and Techniques (but not necessarily restricted to):

New methods for Soft Computing in combination with Deep Learning

New learning methods with Soft Computing concepts for established deep learning architectures and structures

Faster and more robust Soft Computing methods for learning of deep models

Complexity Reduction with Soft Computing methods and Transformation of Deep Learning Models

Evolutionary and Soft Computing-based optimization and tuning of deep learning models

Evolving and Soft Computing techniques for deep learning systems (expanding and pruning layers, components etc. on the fly)

Metaheuristics aspects and Soft Computing algorithms in deep learning for improved convergence

Hybrid learning schemes with Soft Computing (deterministic with heuristics-based, memetic)

Interpretability Aspects with Soft Computing for a better Understanding of Deep Learning Models

Soft Computing Methods for non-established deep learning models (deep SVMs, deep fuzzy models, deep clustering techniques, ...)

Real-World Applications of deep learning techniques, such as (but not necessarily restricted to):

Cloud and Fog Computing in AI

Big Data Analysis

Context-Awareness and Intelligent Environment Application

Financial Engineering and Time Series Forecasting and Analysis

FinTech Application

Innovative Machine-Learning Applications

Intelligent E-Learning & Tutoring

Intelligent Human-Computer Interaction

IoT Application

Smart Healthcare

Social Computing

Biological Computing

Smart Living and Smart Cities

Information Security

Natural Language Processing

Artificial Intelligence | Computer Speech and Language Special Issue on Advances in Automatic Speaker Verification Anti-spoofing

Full-paper deadline: 2019-09-30. Impact factor: • Subcategory: Computer Science: Artificial Intelligence - Zone 4. URL:

The performance of voice biometrics systems based on automatic speaker verification (ASV) technology degrades significantly in the presence of spoofing attacks. Over the past few years considerable progress has been made in the field of ASV anti-spoofing. This includes the development of new speech corpora, common evaluation protocols and advancements in front-end feature extraction and back-end classifiers. The ASVspoof initiative was launched to promote the development of countermeasures which aim to protect ASV from spoofing attacks. ASVspoof 2015, the first edition, focused on the detection of synthetic speech created with voice conversion (VC) and text-to-speech (TTS) methods. The second edition, ASVspoof 2017, focused on the detection of replayed speech.

ASVspoof 2019, the latest edition, included two sub-challenges geared towards "logical access" (LA) and "physical access" (PA) scenarios. The LA scenario relates to the detection of synthetic speech created with advanced VC and TTS methods developed by academic and non-academic organizations. The PA scenario promotes the development of countermeasures for the detection of replayed speech signals. More than 60 academic and industrial teams participated in the ASVspoof 2019 challenge. Preliminary results indicate considerable performance improvements in terms of the two evaluation metrics adopted for the challenge. The top-ranking teams applied different machine learning algorithms suitable for discriminating natural from spoofed speech.

This special issue will feature articles describing top-performing techniques and detailed analyses of some of the systems reported in recent years by leading anti-spoofing researchers. The special issue will also consist of an overview article which covers ASVspoof 2019 challenge results, and meta analyses. The scope of the special issue is, however, not limited to work performed using the ASVspoof challenge datasets; studies conducted with other datasets are also welcome.
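For readers new to this evaluation setting: a standard countermeasure metric in this literature is the equal error rate (EER), the operating point at which the false-acceptance rate on spoofed trials equals the false-rejection rate on bona fide trials. A minimal NumPy sketch with synthetic placeholder scores:

```python
import numpy as np

def equal_error_rate(bona_fide_scores, spoof_scores):
    """EER: threshold where the rate of accepted spoof trials equals
    the rate of rejected bona fide trials. Higher scores are assumed
    to indicate bona fide speech."""
    thresholds = np.sort(np.concatenate([bona_fide_scores, spoof_scores]))
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])
    frr = np.array([(bona_fide_scores < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

rng = np.random.default_rng(1)
bona = rng.normal(1.0, 1.0, 1000)    # toy countermeasure scores
spoof = rng.normal(-1.0, 1.0, 1000)
print("EER ~= %.3f" % equal_error_rate(bona, spoof))
```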

Please contact info@asvspoof.org if you have any questions about the relevance of your work for this special issue.

Topics of interest include (but are not limited to):

Speaker verification anti-spoofing on ASVspoof 2019

Datasets for speaker verification anti-spoofing

Deep learning for spoofing and anti-spoofing

Joint evaluation of countermeasures and speaker verification

Evaluation methodology for speaker verification anti-spoofing

Voice conversion for spoofing speaker verification systems

Text-to-speech for spoofing speaker verification systems

Robust spoofing countermeasures

Generalized spoofing countermeasures

Audio watermarking for spoofing countermeasures

Acoustic fingerprinting for spoofing countermeasures

Knowledge-based approaches for spoofing countermeasures

Open source toolkit for speaker verification anti-spoofing

Artificial Intelligence | Neural Networks Special Issue on Deep Neural Network Representation and Generative Adversarial Learning

Full-paper deadline: 2019-09-30. Impact factor: • Major category: Engineering & Technology - Zone 1 • Subcategory: Computer Science: Artificial Intelligence - Zone 2 • Subcategory: Neuroscience - Zone 2. URL:

Generative Adversarial Networks (GANs) have proven to be efficient systems for data generation. Their success is achieved by exploiting a minimax learning concept, which has proved to be an effective paradigm in earlier works, such as predictability minimization, in which two networks compete with each other during the learning process. One of the main advantages of GANs over other deep learning methods is their ability to generate new data from noise, as well as their ability to virtually imitate any data distribution. However, generating realistic data using GANs remains a challenge, particularly when specific features are required; for example, constraining the latent aggregate distribution space does not guarantee that the generator will produce an image with a specific attribute. On the other hand, new advancements in deep representation learning (RL) can help improve the learning process in Generative Adversarial Learning (GAL). For instance, RL can help address issues such as dataset bias and network co-adaptation, and identify a set of features that are best suited for a given task.
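For reference, the minimax learning concept mentioned above is conventionally written as the following two-player value function (the canonical GAN objective of Goodfellow et al., 2014, reproduced here for context rather than taken from the call itself):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D is trained to tell real samples from generated ones while the generator G is trained to fool it; the convergence and mode-collapse problems discussed next are failures of this two-player game to reach a useful equilibrium.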

Despite their obvious advantages and their application to a wide range of domains, GANs have yet to overcome several challenges. They often fail to converge and are very sensitive to parameter and hyper-parameter initialization. Simultaneous learning of a generator and a discriminator network often results in overfitting. Moreover, the generator model is prone to mode collapse, which results in failure to generate data with several variations. Accordingly, new theoretical methods in deep RL and GAL are required to improve the learning process and generalization performance of GANs, as well as to yield new insights into how GANs learn data distributions.

This special issue on Deep Neural Network Representation and Generative Adversarial Learning invites researchers and practitioners to present novel contributions addressing theoretical and practical aspects of deep representation and generative adversarial learning. The special issue will feature a collection of high-quality theoretical articles on improving the learning process and the generalization of generative neural networks. State-of-the-art applications based on deep generative adversarial networks are also very welcome.

Main Topics include:

Topics of interest for this special issue include, but are not limited to:

Representation learning methods and theory;

Adversarial representation learning for domain adaptation;

Network interpretability in adversarial learning;

Adversarial feature learning;

RL and GAL for data augmentation and class imbalance;

New GAN models and new GAN learning criteria;

RL and GAL in classification;

Adversarial reinforcement learning;

GANs for noise reduction;

Recurrent GAN models;

GANs for imitation learning;

GANs for image segmentation and image completion;

GANs for image super-resolution;

GANs for speech and audio processing

GANs for object detection;

GANs for Internet of Things;

RL and GANs for image and video synthesis;

RL and GANs for speech and audio synthesis;

RL and GANs for text to audio or text to image synthesis;

RL and GANs for inpainting and sketch to image;

RL and GAL in neural machine translation;

RL and GANs in other application domains.

Artificial Intelligence | Computer Vision and Image Understanding Special Issue on "Adversarial Deep Learning in Biometrics & Forensics"

Full-paper deadline: 2019-10-01. Impact factor: • Subcategory: Computer Science: Artificial Intelligence - Zone 3 • Subcategory: Engineering: Electrical & Electronic - Zone 3. URL:

SCOPE

In the short course of a few years, deep learning has changed the rules of the game in a wide array of scientific disciplines, achieving state-of-the-art performance in major pattern recognition application areas. Notably, it has recently been applied even in fields such as image biometrics and forensics (e.g., face recognition, forgery detection and localization, and source camera identification).

However, recent studies have shown that these models are vulnerable to adversarial attacks: a trained model can easily be deceived by introducing a barely noticeable perturbation into the input image. Such a weakness is obviously more critical for security-related applications, calling for countermeasures. Indeed, adversarial deep learning will have a high impact on the fields of biometrics and forensics in the near future.
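To make the attack model concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one classic way to craft such perturbations; the call itself does not name an attack, and the logistic-regression "model" here is a toy placeholder for a deep network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=64)    # weights of a toy trained classifier
x = rng.normal(size=64)    # a correctly classified "image"
y = 1.0                    # its true label

# Gradient of the cross-entropy loss w.r.t. the *input* x.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: an epsilon-bounded, barely noticeable perturbation that
# moves the input in the direction that most increases the loss.
epsilon = 0.05
x_adv = x + epsilon * np.sign(grad_x)

print("score before: %.3f  after: %.3f"
      % (sigmoid(w @ x), sigmoid(w @ x_adv)))
```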

The aim of this special issue is hence to gather innovative contributions on methods able to resist adversarial attacks on deep neural networks applied in both image biometrics and forensics, encouraging proposals of novel approaches and more robust solutions.

TOPICS

Submissions are encouraged, but not limited, to the following topics:

Adversarial biometric recognition

Attack transferability in biometric applications

Physical attacks in biometric authentication systems

Attacks to person re-identification systems

Poisoned enrollment datasets

Multimodal biometric systems as a defense

Blind defense at test time for forensic and biometric systems

Novel counter-forensics methods

Design of robust forgery detectors

Adversarial patches in forensic applications

Image anonymization

Adversarial attack and defense in video forensics

Steganography and steganalysis in adversarial settings

Cryptography-based methods

Artificial Intelligence | Engineering Applications of Artificial Intelligence Special Issue on Pushing Artificial Intelligence to Edge: Emerging Trends, Issues and Challenges

Full-paper deadline: 2019-11-15. Impact factor: • Major category: Engineering & Technology - Zone 2 • Subcategory: Automation & Control Systems - Zone 3 • Subcategory: Computer Science: Artificial Intelligence - Zone 3 • Subcategory: Engineering: Electrical & Electronic - Zone 3 • Subcategory: Engineering: Multidisciplinary - Zone 2. URL:

Driven by the Internet of Things (IoT), a new computing model, Edge computing (the Edge of Things, EoT), is currently evolving. It allows IoT data processing, storage, and service supply to be moved from the Cloud to local Edge devices such as smartphones, smart gateways or routers, and base stations that can offer computing and storage capabilities on a smaller scale and in real time. EoT pushes data storage, computing, and control closer to the IoT data source(s), enabling each Edge device to determine what information should be stored or processed locally and what needs to be sent to the Cloud for further use. Thus, EoT enables IoT services to meet requirements of low latency, high scalability, and energy efficiency, while mitigating the traffic burden on the transport network.

However, the current expansion of the IoT and of digital transformation is generating new demands on computing and networking infrastructures across all industries (automotive, aerospace, life safety, medical, entertainment, manufacturing, etc.), and it is becoming challenging for Edge computing to cope with these emerging IoT environments. Overcoming this calls for an intelligent Edge, that is, Artificial Intelligence (AI)-powered Edge computing (Edge-AI), to manage the new data needs of these sectors. AI, with its machine learning (ML) abilities, can be fused into the Edge to extend its power to intelligently investigate, collect, store, and process large amounts of IoT data, maximizing the potential of data analytics and real-time decision-making with minimum delay. Edge-AI has many application areas, such as fall-detection systems for the elderly, intelligent clothing for safety applications, smart access systems, smart cameras, smart fitness systems, pet monitoring systems, self-predictive electric drives, and so on.
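A hypothetical sketch of the core Edge-AI pattern described above: an on-device model handles a reading locally when it is confident, and defers to the Cloud otherwise. The confidence threshold, the stand-in model, and the payloads are all illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float

def classify_locally(reading: Reading) -> float:
    """Hypothetical stand-in for an on-device model returning a
    confidence score for its local decision."""
    return 0.9 if abs(reading.value) < 3.0 else 0.4

def handle(reading: Reading, confidence_threshold: float = 0.8):
    confidence = classify_locally(reading)
    if confidence >= confidence_threshold:
        return ("handled-at-edge", confidence)   # low-latency local path
    return ("sent-to-cloud", confidence)         # needs heavier analysis

print(handle(Reading("fall-sensor-1", 1.2)))
print(handle(Reading("fall-sensor-1", 7.5)))
```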

While researchers and practitioners have been making progress in Edge-AI, several challenging issues must still be addressed before its large-scale adoption. These include: credibility and trust management; distributed optimization of multi-agent systems at the Edge; self-organization, self-configuration, and self-discovery of edge nodes; a lack of standards in the containerization area (Docker, the Open Container Initiative, etc.) for Edge-AI; security risks for data that must be processed at the edge; a lack of efficient scheduling algorithms for optimizing AI and machine learning on Edge computing structures; new operating systems for edge artificial intelligence; and so on.

This special issue targets a mixed audience of researchers, academics, and industry practitioners from different communities, to share and exchange new ideas, approaches, theories, and practice for resolving the challenging issues associated with leveraging the intelligent Edge paradigm. The suggested topics of interest include, but are not limited to:

Novel middleware support for Edge intelligence

Network function virtualization technologies that leverage Edge intelligence

Trust, security and privacy issues for Edge-AI

Distributed optimization of multi agent systems for Edge intelligence

Self-organization, self-configuration, and self-discovery of Edge node

Semantic interoperability for Edge intelligence

Autonomic resource management for Edge-AI

Mobility, Interoperability and Context-awareness management for Edge-AI

Container based approach to implement AI in Edge

Applications/services for Edge artificial intelligence

New operating system for Edge intelligence

5G-enabled services for Edge intelligence

Software and simulation platform for Edge AI

AI, Blockchain and Edge computing

Artificial Intelligence | Computer Vision and Image Understanding Special Issue on Deep Learning for Image Restoration

Full-paper deadline: 2019-12-15. Impact factor: • Subcategory: Computer Science: Artificial Intelligence - Zone 3 • Subcategory: Engineering: Electrical & Electronic - Zone 3. URL:

Scope

Recent years have witnessed significant advances in image restoration and related low-level vision problems thanks to various kinds of deep models. Image restoration methods based on deep models do not need statistical priors and achieve impressive performance. However, several problems remain. For example, 1) synthesizing realistic degraded images as training data for neural networks is quite challenging, since degraded/clean image pairs are hard to obtain in real-world applications; 2) because deep models are usually black-box, end-to-end trainable networks, it is difficult to analyze which parts actually help restoration; 3) using deep neural networks to model the image formation process is promising but still lacks efficient algorithms; and 4) accuracy and efficiency in real-world applications still leave considerable room for improvement.
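Problem (1) above, synthesizing degraded/clean training pairs, is commonly approximated with a parametric degradation model. The following sketch (Gaussian blur plus additive white noise, with made-up parameters) shows the usual pattern; real degradations are more complex, which is exactly the challenge the call highlights:

```python
import numpy as np

def synthesize_pair(clean, kernel_size=5, sigma=1.2, noise_std=0.02):
    """Create a (degraded, clean) training pair: convolve a
    single-channel image with a Gaussian blur kernel, then add
    white Gaussian noise."""
    ax = np.arange(kernel_size) - kernel_size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()

    # 'same'-size 2D convolution via reflective padding.
    pad = kernel_size // 2
    padded = np.pad(clean, pad, mode="reflect")
    degraded = np.zeros_like(clean)
    h, w = clean.shape
    for i in range(h):
        for j in range(w):
            degraded[i, j] = (padded[i:i + kernel_size,
                                     j:j + kernel_size] * kernel).sum()
    degraded += np.random.normal(0, noise_std, degraded.shape)
    return degraded, clean

clean = np.random.rand(32, 32)   # toy stand-in for a real photograph
degraded, target = synthesize_pair(clean)
print(degraded.shape, target.shape)
```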

This special issue aims to make a significant collective contribution to this field by soliciting original algorithms, theories, and applications for image restoration and related low-level vision problems. Specifically, we aim to solicit research papers that 1) propose theories related to deep learning for image restoration and related problems; 2) develop state-of-the-art algorithms for real-world applications; 3) present thorough literature reviews/surveys of the recent progress in this field; or 4) establish real-world benchmark datasets for image restoration and related low-level vision problems.

Topics

Topics of interest include, but are not limited to:

Theory:

Deep learning

Generative adversarial learning

Weakly supervised learning

Semi-supervised learning

Unsupervised learning

Algorithms and applications:

Image/video deblurring, denoising, super-resolution, dehazing, deraining, etc.

Image/video filtering, editing, and analysis

Image/video enhancement and other related low-level vision problems

Low-quality image analysis and related high-level vision problems

Artificial Intelligence | Image and Vision Computing Special Issue on Novel Insights on Ocular Biometrics in Image and Vision Computing

Full-paper deadline: 2019-12-31. Impact factor: • Subcategory: Computer Science: Artificial Intelligence - Zone 3 • Subcategory: Computer Science: Software Engineering - Zone 2 • Subcategory: Computer Science: Theory & Methods - Zone 3 • Subcategory: Engineering: Electrical & Electronic - Zone 3 • Subcategory: Optics - Zone 3. URL:

Notwithstanding the enormous potential of traits in the ocular region for biometric applications, this line of research still raises several open issues, which justifies the ongoing research efforts. For instance, the relatively recent emergence of the periocular and sclera traits makes it worth recording the progress of this area. Also, all the traits underlying ocular biometrics, and their possible combinations, still need to be more thoroughly investigated, not only to improve recognition robustness but also to gauge the potential of such traits to play a significant role in solving emerging problems in the biometrics domain, such as "systems interpretability", "weakly/partially supervised recognition", or "forensic evidence and biometric recognition". This special issue aims to provide a platform to publish and record recent research on ocular biometrics, in order to push the border of the state of the art.

Topics of interest include, but are not limited to:

· Ocular biometrics at-a-distance and in-the-wild;

· Ocular biometrics beyond texture features;

· Ocular biometrics in mobile environments;

· Segmentation, enhancement issues of ocular biometrics;

· Interpretability in ocular biometrics;

· Weakly supervised ocular biometric recognition;

· Liveness of ocular biometrics;

· Adaptability of ocular biometrics;

· Databases on ocular biometrics;

· Ocular biometrics for forensics applications;

· Ocular biometrics for classifying gender, age, ethnicity;

· Ocular biometrics for newly born or twins;

· Fusion of different ocular biometrics like sclera, iris, periocular region etc.

Artificial Intelligence | Neurocomputing Special Issue on Human Visual Saliency and Artificial Neural Attention in Deep Learning

Full-paper deadline: 2020-01-10. Impact factor: • Major category: Engineering & Technology - Zone 2 • Subcategory: Computer Science: Artificial Intelligence - Zone 2. URL:

The human visual system can process large amounts of visual information (on the order of 10^8-10^9 bits per second) in parallel. This astonishing ability rests on the visual attention mechanism, which allows human beings to selectively attend to the most informative and characteristic parts of a visual stimulus rather than the whole scene. Modeling visual saliency is a long-standing core topic in the cognitive psychology and computer vision communities. Further, understanding human gaze behavior in social scenes is essential for understanding Human-Human Interactions (HHIs) and for enabling effective and natural Human-Robot Interactions (HRIs). In addition, the selective mechanism of the human visual system has inspired differentiable neural attention in neural networks. Networks with attention mechanisms automatically learn to focus selectively on parts of their input, and have seen wide success in many natural language processing and mainstream computer vision tasks, such as neural machine translation, sentence summarization, image caption generation, visual question answering, and action recognition. The visual attention mechanism also boosts biologically inspired object recognition, including salient object detection, object segmentation, and object classification.
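At its core, the differentiable neural attention mentioned here is a softmax-weighted average of input features. A minimal sketch follows; dot-product scoring is one common choice among several, and the feature dimensions are arbitrary:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(query, features):
    """Soft attention: score each input position against a query,
    normalize the scores, and return the weighted combination.
    The network thereby learns to focus on informative positions."""
    scores = features @ query    # dot-product relevance scores
    weights = softmax(scores)    # differentiable "selection"
    context = weights @ features # weighted sum of input features
    return context, weights

rng = np.random.default_rng(0)
features = rng.normal(size=(7, 16))  # e.g., 7 image regions or words
query = rng.normal(size=16)
context, weights = attend(query, features)
print(weights.round(2), context.shape)
```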

The list of possible topics includes, but is not limited to:

Visual attention prediction during static/dynamic scenes

Computational models for saliency/co-saliency detection in images/videos

Computational models for social gaze, co-attention and gaze-following behavior

Gaze-assistant Human-Robotics Interaction (HRI) algorithms and gaze-based Human-Human Interaction (HHI) models

Neural attention-based NLP applications (e.g., neural machine translation, sentence summarization, etc.)

Approaches for attention-guided object recognition, such as object classification, object segmentation and object detection.

Visual saliency for various applications (e.g., object tracking, human-machine interaction, and automatic photo editing, etc.)

Artificial attention and multi-modal attention based applications (e.g., network knowledge distillation, network visualization, image captioning, and visual question answering, etc.)

New benchmark datasets and evaluation metrics related to the aforementioned topics

Artificial Intelligence | Applied Soft Computing Special Issue on Immune Computation: Algorithms & Applications

Full-paper deadline: 2020-01-15. Impact factor: • Major category: Engineering & Technology - Zone 2 • Subcategory: Computer Science: Artificial Intelligence - Zone 2 • Subcategory: Computer Science: Interdisciplinary Applications - Zone 2. URL:

I. AIM AND SCOPE

Immune Computation, also known as "Artificial Immune Systems" (AIS), is a fast-developing research area in the computational intelligence community, inspired by the information-processing mechanisms of the biological immune system. Many of these algorithms are built on solid theoretical foundations, derived from mathematical models and computational simulations of aspects of the immune system. The scope of this research area ranges from modeling and simulation of the immune system to the development of novel engineering solutions to complex problems, and it bridges several disciplines to provide new insights into immunology, computer science, mathematics, and engineering.
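One of the field's signature techniques, negative selection, is easy to sketch: randomly generated detectors that match "self" (normal) samples are censored, in analogy with T-cell maturation, and the survivors are used to flag anomalous, non-self data. A minimal sketch with made-up radii and toy 2-D data:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Self" set: samples describing normal behavior (toy 2-D data).
self_samples = rng.normal(0.0, 1.0, size=(200, 2))
radius = 0.7   # detector matching radius (arbitrary choice)

# Negative selection: keep only candidate detectors that do NOT
# match any self sample, mimicking T-cell censoring in the thymus.
detectors = []
while len(detectors) < 50:
    candidate = rng.uniform(-5, 5, size=2)
    if np.linalg.norm(self_samples - candidate, axis=1).min() > radius:
        detectors.append(candidate)
detectors = np.array(detectors)

def is_anomalous(x):
    """A sample matched by any surviving detector is flagged non-self."""
    return np.linalg.norm(detectors - x, axis=1).min() <= radius

print(is_anomalous(np.array([0.1, -0.2])))  # likely False (self region)
print(is_anomalous(np.array([4.0, 4.0])))   # likely True  (non-self)
```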

This special issue is an activity of the IEEE CIS Task Force on Artificial Immune Systems. Its aims are: (1) to present state-of-the-art research on Artificial Immune Systems, especially immune-based algorithms for real-world applications; and (2) to provide a forum for experts to disseminate their recent advances and views on future perspectives in the field.

II. THEMES

Following the development of AISs, the topics of this special issue will focus on the novel immune algorithms and their real-world applications. Topics of interest include, but are not limited to:

1. Immune algorithms

Clonal selection algorithms

Immune network algorithms

Dendritic cell algorithms

Negative/positive selection algorithms

Negative representations of information

Hybrid immune algorithms

Novel immune algorithms

2. Applications

Immune algorithms for optimization, including multi-objective optimization, dynamic and noisy optimization, multimodal optimization, constrained optimization, large scale optimization

Immune algorithms for security, including intrusion detection, anomaly detection, fraud detection, authentication

Immune-based privacy protection schemes as well as sensitive data collection

Immune-based data mining techniques

Immune algorithms for pattern recognition

Immune algorithms for robotics and control

Immune algorithms for fault diagnosis

Immune algorithms for big data

Immune algorithms for bioinformatics

Artificial Intelligence | Artificial Intelligence Special Issue on Explainable Artificial Intelligence

Full-paper deadline: 2020-03-01. Impact factor: 3.034. CCF rank: A • Major category: Engineering & Technology - Zone 2 • Subcategory: Computer Science: Artificial Intelligence - Zone 2. URL:

As intelligent systems become more widely applied (in robots, automobiles, and medical and legal decision-making), users and the general public are increasingly concerned with issues of understandability and trust. The current generation of intelligent systems based on machine learning can seem inscrutable. Consequently, explainability has become an important research topic, in both the research literature and the popular press. These considerations in the public discourse are partly responsible for the establishment of projects such as DARPA's Explainable AI program, Europe's General Data Protection Regulation, and the recent series of XAI workshops at major AI conferences such as IJCAI. In addition, because "explainability" is inherently about helping humans understand intelligent systems, XAI is also gaining interest in the human-computer interaction (HCI) community.

The creation of explainable intelligent systems requires at least two major components: first, explainability is an issue of human-AI interaction; second, it requires the construction of representations that support the articulation of explanations. Achieving Explainable AI requires interdisciplinary research encompassing artificial intelligence, social science, and human-computer interaction.

A recent survey published in AIJ () shows that philosophy and cognitive and social psychology offer a rich understanding of how humans explain concepts to themselves and to others. That work develops a framework for the first issue noted above: what counts as an explanation, supporting the HCI aspects of XAI. Work on the second challenge, building models that support explanation (especially in intelligent systems based on machine learning), is more scattered, ranging from recursive application of deep learning all the way to the induction of logical causal models.

This special issue seeks contributions on foundational studies in Explainable Artificial Intelligence. In particular, we seek research articles that address the fact that explainable AI is both a technical problem and a human problem, and that scientific work on explainable AI must consider that it is ultimately humans who need to understand the technology.

The importance of Explainable AI is manifest in the number of conferences and conference sessions on the topic announced in recent months, along with calls for reports on explainability in specialized areas such as robotics, planning, machine learning, optimisation, and multi-agent systems.

Topics

Human-centric Explainable AI: Submissions with the flavor of both an AI research report and a report on a human-behavioural experiment are of particular interest. Such submissions must convey details of the research methods (experimental designs, control conditions, etc.) and should present results that adduce convincing empirical evidence that the XAI processes genuinely succeed in explaining to their intended users.

Theoretical and Philosophical Foundations: We invite submissions on the philosophical, theoretical, or methodological issues in Explainable AI (XAI). In particular, we encourage submissions that go beyond standard issues of interpretability and causal attribution, into the foundations of how to provide meaningful insights from AI models that are useful for people other than computer scientists.

Explainable Black-box Models: We invite submissions that investigate how to provide meaningful and actionable insights into black-box models, especially machine learning approaches using opaque models such as deep neural networks. In particular, we encourage submissions that go beyond the extraction of interpretable features, for example by considering explanation as a process, building user mental models, or providing contrastive explanations.
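As a point of reference for what "actionable insights into black-box models" can mean at its simplest, here is a hedged sketch of permutation importance, a classic model-agnostic baseline; the black_box_predict function is a hypothetical stand-in for any opaque model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first two features influence the label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

def black_box_predict(data):
    """Hypothetical stand-in for an opaque model such as a deep net."""
    return (data[:, 0] + 0.2 * data[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=20):
    """Importance of feature j = average accuracy drop when column j
    is shuffled, breaking its relationship with the target."""
    baseline = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drop = 0.0
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drop += baseline - (predict(X_perm) == y).mean()
        importances.append(drop / n_repeats)
    return importances

print([round(v, 3) for v in permutation_importance(black_box_predict, X, y)])
```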

Knowledge Representation and Machine Learning: Submissions that investigate the use of knowledge representation techniques, including user modelling, abductive reasoning, diagnosis, etc., are of interest. In particular, we encourage submissions that capitalize on the combined strengths of knowledge representation and explainable machine learning.

Interactive Explanation: Submissions are of interest if they report on research in which human users or learners interact with intelligent systems in an explanation modality, leading to improved performance of the human-machine work system. Submissions that regard explanation as an exploratory, interactive process are of particular interest, in contrast with the model that treats explanation as a one-way paradigm.

Historical Perspectives: One additional topic of particular interest is Expert Systems, since many of the current issues of interpretability, explainability, and explanation-as-a-process first emerged in the era of Expert Systems. Brief historical retrospectives on these fundamental problems are encouraged.

Case Study Reports: We invite short reports outlining case studies that illustrate the consequences of a lack of explainability in intelligent systems, with the aim of providing motivating examples, benchmarks, and challenge problems for the community.
