It has been quite a long time, but I finally managed to post about AICON 2019, a global AI conference held at the Yangjae R&D Innovation Hub on December 17th, 2019.
It was my first time participating in an official AI conference, so I was really excited about it.
The notes below are what I wrote down while listening to the lectures.
They may not be well organized, since I jotted them down somewhat randomly while concentrating on the presentations and taking pictures.
Malware Analysis using Artificial Intelligence and Deep Learning
(by Mamoun Alazab, Associate Professor, Charles Darwin University)
- Obtaining data and expertise is crucial for cyber security. In other words, understanding AI leads to preventing many crimes that themselves use AI.
- Since cybercrime is low-risk but high-reward, it has become a huge problem all over the world. It is also difficult to detect and track because attacks can be launched from anywhere in the world.
- Code that has been obfuscated can be decoded with AI technology, reducing both time and cost.
- A new wave of innovation in cyber security can come from large amounts of data and high computing performance. Work originally done by human analysts can be automated with AI, which can predict and detect crimes more efficiently.
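The talk stayed at a high level, but as a rough illustration of what "malware analysis with machine learning" can look like in practice, here is a minimal, hypothetical Python sketch (not from the talk) that classifies binaries from byte-frequency features; the data is synthetic and every name is made up:

```python
# Hypothetical sketch: classifying binaries as malicious or benign
# from normalized byte-frequency (histogram) features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def byte_histogram(blob: bytes) -> np.ndarray:
    """Normalized frequency of each byte value (0-255) in a blob."""
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / max(counts.sum(), 1)

# Synthetic stand-ins for real labeled binaries (1 = malware, 0 = benign).
blobs = [rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes()
         for _ in range(100)]
labels = rng.integers(0, 2, size=100)

X = np.stack([byte_histogram(b) for b in blobs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X[:3])[:, 1])  # estimated malware probability
```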
Research and Development of FAT Artificial Intelligence
(by Chang Dong Yoo, Professor of Electrical Engineering, KAIST)
- FAT stands for “Fair, Accountable, Transparent”.
- It is necessary to eliminate biases, since AI can make decisions biased by race, sex, ideology, etc. For instance, it may estimate a higher probability of crime for an innocent African American based on race, or exclude women from a recruiting process without proper reason.
- Such sensitive attributes should not influence the model's decisions. But this cannot be solved in a naive way, such as simply excluding those features, because other features can act as proxies for them. There is a trade-off between accuracy and minimizing unwanted bias.
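To make the fairness idea a bit more concrete, here is a minimal, hypothetical Python sketch (not from the talk) that measures one simple criterion, the demographic parity gap, on made-up predictions:

```python
# Hypothetical sketch: demographic parity gap, i.e. the difference in
# positive-outcome rates between two groups of a sensitive attribute.
import numpy as np

rng = np.random.default_rng(0)

# Made-up predictions (1 = positive outcome, e.g. "hire") and a
# binary sensitive attribute (e.g. group A vs. group B).
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

rate_a = y_pred[group == 0].mean()  # positive rate in group A
rate_b = y_pred[group == 1].mean()  # positive rate in group B

# Demographic parity asks these rates to be (nearly) equal; the gap
# quantifies how far the model is from that, regardless of accuracy.
print(f"demographic parity gap = {abs(rate_a - rate_b):.3f}")
```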
Mathematical understanding of AI's geometric structure and its application to medical image restoration
(by Jong Chul Ye, Professor of Bio and Brain Engineering, KAIST)
- Using a DNN model to remove noise and blur from videos can also be actively applied in the field of CT.
- Image-domain learning is a basic deep learning technique, usually applied to noise removal. A hybrid approach passes the data itself through the network along with the computation and checks whether the data is restored properly, which is similar to ResNet (see the residual-denoising sketch after this list). AUTOMAP, which passes the data through fully connected layers, is a good example of domain-transform learning, and it requires a large amount of computation.
- Many people say that AI is a black box that we cannot interpret. But maybe we cannot analyze it because it is too simple: it produces far better results with simple principles than many complicated mathematical algorithms do.
- Why can the same architecture be applied to so many different problems? One way to see it is that a neural network creates a large number of regions that can be assigned to different categories. These regions are divided over and over again, and this is what leads to solutions for different tasks.
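As a rough illustration of the ResNet-style hybrid idea mentioned above, here is a minimal, hypothetical `tf.keras` sketch (not from the talk) of a residual denoiser: the convolutional branch estimates the noise, and a skip path subtracts it from the input, so the network only has to learn the residual:

```python
# Hypothetical sketch: a tiny residual denoiser. The conv branch
# predicts the noise; subtracting it from the input restores the image.
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 1))  # noisy grayscale image
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
noise = tf.keras.layers.Conv2D(1, 3, padding="same")(x)   # estimated noise
denoised = tf.keras.layers.Subtract()([inputs, noise])    # input - noise

model = tf.keras.Model(inputs, denoised)
model.compile(optimizer="adam", loss="mse")  # train on (noisy, clean) pairs
model.summary()
```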
Explainable AI for satellite image analysis
(by Tae Kyun Jeon, CEO, SIA)
- Even if we analyze images effectively, can we say that the analysis resulted from proper decisions and logic? That is why we need explainable AI: we need to focus on why the AI makes a particular decision.
- There are several attempts to explain AI models by generalizing our understanding of which representations a model focuses on and which parts of the data activate it.
- The example-based method identifies which examples most strongly influenced a prediction and assigns them scores. The attribution-based method builds heatmaps of which neurons are activated to find out which features contribute to the decision.
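As a rough illustration of the attribution-based idea, here is a minimal, hypothetical sketch (not from the talk) of a plain gradient saliency map in `tf.keras`; the toy CNN and the random "satellite tile" are stand-ins:

```python
# Hypothetical sketch: gradient saliency, a simple attribution method.
# Large |d score / d pixel| marks inputs that most affect the decision.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),  # 10 made-up land-use classes
])

image = tf.random.uniform((1, 64, 64, 3))  # stand-in for a satellite tile

with tf.GradientTape() as tape:
    tape.watch(image)                       # track gradients w.r.t. the input
    score = tf.reduce_max(model(image)[0])  # score of the top class

# Collapse the channel axis into a single heatmap over the input.
saliency = tf.reduce_max(tf.abs(tape.gradient(score, image)), axis=-1)
print(saliency.shape)  # (1, 64, 64)
```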
The relationship between TensorFlow 2.0 and Keras, and what lies ahead
(by Young Ha Lee, Researcher and Developer, Dplus)
- TensorFlow 2.0 has been released. From now on, Keras is one of TensorFlow's fundamental features; if we need to build a model, we can just use the `tf.keras` namespace.
- There have been several deep learning libraries written in assembly or C++, but they were quite inconvenient to use. Then Keras was released, and TensorFlow came in as a backend framework, eventually becoming the default backend for Keras. Finally, Keras was declared an official API of TensorFlow. As François Chollet said, Keras is the same as `tf.keras`. Multi-backend Keras is no longer supported, and it is enough to use TensorFlow 2.0 and `tf.keras`.
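To make this concrete, here is a minimal sketch of building and compiling a small model purely through the `tf.keras` namespace (the layer sizes and the MNIST-like input shape are arbitrary choices of mine):

```python
# Minimal sketch: defining a model with the tf.keras namespace alone.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                     # e.g. a flattened 28x28 image
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```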
Overall, the conference was quite a decent event for various AI startups and researchers.
I was just a student, so I could not understand everything I heard at the conference.
But I could see that my country is trying to support and grow AI startups and the AI industry.
I decided to attend more conferences whenever I get the chance, to broaden my insight and experience.