
Fujitsu Develops World’s First AI Technology to Accurately Capture Essential Features of High-Dimensional Data

Expected to contribute to improved accuracy for a variety of AI technologies

The new technology was developed for high-dimensional data, including images, network access data, and medical data. Tested against international benchmarks for detecting anomalous data in different fields, it achieved state-of-the-art accuracy, reducing anomaly-detection error rates by up to 37% compared with conventional deep-learning techniques.

Fujitsu hopes to apply the technology in the future to improve the accuracy of a variety of AI systems. Fujitsu Laboratories has announced the development of the world’s first AI technology that accurately captures essential features of high-dimensional data, including its distribution and probability, in order to improve the accuracy of AI detection and judgment.
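The release does not spell out how a learned distribution is turned into a detection decision, but the general idea can be sketched: fit a probability model to normal data, then score new samples by how unlikely they are under it. The following minimal illustration (not Fujitsu's actual method) uses a diagonal Gaussian as a stand-in for the learned distribution; the data, dimensions, and threshold are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: "normal" samples from one distribution, a few anomalies from another.
normal_data = rng.normal(loc=0.0, scale=1.0, size=(500, 5))
anomalies = rng.normal(loc=4.0, scale=1.0, size=(5, 5))

# Fit a diagonal Gaussian to the normal data (a stand-in for a learned distribution).
mu = normal_data.mean(axis=0)
var = normal_data.var(axis=0)

def anomaly_score(x):
    """Negative log-likelihood under the fitted Gaussian: low probability => high score."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)

# Threshold at the 99th percentile of normal scores; anomalies should exceed it.
threshold = np.percentile(anomaly_score(normal_data), 99)
frac_flagged = float(np.mean(anomaly_score(anomalies) > threshold))
print(f"fraction of anomalies flagged: {frac_flagged:.2f}")
```

The better the model captures the true distribution of normal data, the more cleanly low-probability samples separate from it, which is why capturing distribution and probability faithfully matters for detection accuracy.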

Fig. 1: Example of error detection — incorrect decisions due to unquantified empirical methods
Fig. 2: Improvement of error rate when this technology is applied to anomaly detection
Fig. 3: Theoretical framework for acquisition of distribution and probability faithful to data characteristics, inspired by information compression technology
Fig. 4: Deep learning technology to obtain dimensional reduction, transformation, distribution, and probability

High-dimensional data, which includes communication network access data, various types of medical data, and images, remains difficult to process due to its complexity, making it a challenge to obtain the characteristics of the target data. Until now, this made it necessary to reduce the dimensions of the input data using deep learning, at times causing the AI to make incorrect judgments.
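Dimension reduction of the kind described here can be illustrated with a minimal sketch: a linear autoencoder that compresses high-dimensional samples to a few latent dimensions and reconstructs them, trained by gradient descent. All sizes and data in this example are invented for illustration; it shows the mechanism, not Fujitsu's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: 200 samples in 20 dimensions that actually
# lie near a 3-dimensional subspace, plus a little noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 20))

# Linear autoencoder: encode to k dimensions, decode back to 20.
k = 3
W_enc = rng.normal(scale=0.1, size=(20, k))
W_dec = rng.normal(scale=0.1, size=(k, 20))

lr = 0.01
for step in range(2000):
    Z = X @ W_enc              # dimension reduction
    X_hat = Z @ W_dec          # reconstruction
    err = X_hat - X
    # Gradients of the mean squared reconstruction error
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE with k={k}: {mse:.5f}")
```

When the reduced dimension k matches the data's true structure, reconstruction error is small; choosing k badly, or distorting the latent distribution, is exactly the failure mode the article attributes to conventional dimension-reduction pipelines.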

Fujitsu has combined deep learning with its expertise in image compression technology, cultivated over many years, to develop an AI technology that optimizes the processing of high-dimensional data and accurately extracts data features. It applies the information theory used in image compression to deep learning, optimizing both the number of dimensions to which the high-dimensional data is reduced and the distribution of the data after dimension reduction.
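The article does not give the training objective in equation form, but the general flavor of combining compression-style information theory with deep learning can be sketched as a rate-distortion objective: a distortion term (reconstruction error) plus a rate term penalizing how far the latent code strays from a reference distribution. The function below is an illustrative assumption in that style (similar to rate-distortion or variational autoencoders), not Fujitsu's published formula; `beta` and all array shapes are invented for the example.

```python
import numpy as np

def rate_distortion_loss(x, x_hat, z_mean, z_logvar, beta=1.0):
    """Distortion (squared reconstruction error) plus a rate term: the KL
    divergence from the encoded Gaussian N(z_mean, exp(z_logvar)) to a
    standard-normal prior, weighted by beta."""
    distortion = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over latent dims
    kl = 0.5 * np.sum(np.exp(z_logvar) + z_mean ** 2 - 1.0 - z_logvar, axis=1)
    rate = np.mean(kl)
    return distortion + beta * rate

# A perfectly reconstructed batch whose latent code already matches the
# prior costs zero: no distortion and no rate.
x = np.ones((4, 8))
zero_loss = rate_distortion_loss(x, x, np.zeros((4, 2)), np.zeros((4, 2)))
print(zero_loss)  # 0.0
```

Minimizing such an objective trades reconstruction fidelity against how compactly the reduced representation can be coded, which is the sense in which the number of reduced dimensions and their distribution are optimized together.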

Akira Nakagawa, Associate Fellow at Fujitsu Laboratories, commented: “This represents an important step toward addressing one of the key challenges in the AI field in recent years: capturing the probability and distribution of data. We believe that this technology will contribute to performance improvements for AI, and we’re excited about the possibility of applying this knowledge to improve a variety of AI technologies.”

Details of this technology will be presented at the International Conference on Machine Learning (ICML 2020) on Sunday, July 12.



Niloy Banerjee

A movie buff and print-journalism professional serving editorial verticals in the technical and B2B segments; a rover and writer on business happenings; in spare time, a player of both physical and digital games; a perennial lover of philosophy, forever collecting pebbles from the ocean of literature; and, lastly, a connoisseur of making and eating palatable cuisines.
