At the Swin BIC 2024 conference, with the theme of "Data Safety in the AI Era," Mr. Ngo Tuan Anh, Chairman of SCS Smart Cyber Security JSC, shared that the rapid development of AI has increased the value of data exponentially.

AI relies on data to learn, analyze, and make decisions. The more diverse and rich the data is, the smarter and more accurate AI models become. Data has become an extremely valuable asset.

With such importance, ensuring data safety is a critical mission and a challenging task. "This is not only an issue for Vietnam but also for many other countries globally, as data breaches are becoming increasingly prevalent and sophisticated," said Mr. Ngo Tuan Anh.

ONLY 11% OF RISK MANAGEMENT EXPERTS ARE CONFIDENT IN THEIR ABILITY TO DEPLOY SAFE AI

Speaking at the conference, Mr. Doan Huu Hau, Director of Digital Transformation and AI Services at FPT Digital, explained that, because of their complexity, AI systems can be vulnerable to attack even during the model training phase: by manipulating the data or the algorithms, an attacker can push the model toward inaccurate results.

During the deployment and usage phases, attacks can target the model using adversarial data to deceive or bias it and extract sensitive information. Additionally, AI systems may also be subject to disruption attacks, affecting the overall system's performance and reliability.
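As a rough illustration of such an evasion attack with adversarial data, the Python sketch below applies a small, bounded perturbation to the input of a toy logistic-regression model so that its decision flips; the model, weights, and figures are invented for illustration and are not from the conference.

```python
# Toy evasion ("adversarial example") attack: a small, bounded change to the
# input flips a trained model's decision while staying hard to notice.
import numpy as np

# Hypothetical trained logistic-regression weights for a 4-feature model.
w = np.array([1.5, -2.0, 0.7, 0.3])
b = -0.1

def predict_proba(x: np.ndarray) -> float:
    """Probability that the input is classified as the positive class."""
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

x = np.array([0.2, 0.4, 0.1, 0.5])            # a legitimate input, score below 0.5
print("clean score:      ", round(predict_proba(x), 3))

# FGSM-style step: move every feature in the direction that raises the score,
# capped at epsilon so the perturbation stays subtle.  For a linear model the
# gradient of the logit with respect to x is simply w.
epsilon = 0.15
x_adv = x + epsilon * np.sign(w)
print("adversarial score:", round(predict_proba(x_adv), 3))
print("max perturbation: ", round(float(np.max(np.abs(x_adv - x))), 3))
```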

โ€œData poisoningโ€ is a potential risk during the data collection process, where inaccurate data is introduced, critical data is deleted, or data labels are altered, causing the model to learn from incorrect information. Attackers can modify training algorithms, hyperparameters, or neural network structures, resulting in unreliable output.

The consequences of attacks on AI systems can be severe. Firstly, AI models may make biased decisions, negatively impacting businesses and individuals. These errors also lead to a loss of trust in AI, reducing its acceptance and application in society.

Attacks on AI models can result in significant economic losses, including system repairs and damages caused by erroneous decisions. These consequences emphasize the importance of security and risk management during AI system development and deployment.

According to Mr. Doan Huu Hau, a global survey of risk management experts found that 58% of respondents believed that AI poses significant risks and could lead to severe and unexpected consequences within the next two years for businesses, individuals, and society.

While the surveyed experts emphasized enhancing AI system security, only 11% felt confident in their ability to deploy AI safely and manage associated risks effectively.

"This doesn't imply that the capabilities of security professionals are inadequate, but it reflects a practical issue: AI systems are facing increasingly sophisticated threats throughout their lifecycle," explained Mr. Doan Huu Hau.

5 CRUCIAL PRINCIPLES FOR BUILDING A SECURE AI SYSTEM

Watsonx Assistant, an AI system developed by IBM, is a healthcare application that assists doctors in diagnosis, examination, and prescription. Watsonx leverages machine learning (ML) and natural language processing (NLP) and is built on principles that ensure safety and reliability.

The success of Watsonx Assistant is attributed to seven safety factors, including end-to-end encryption to protect data both at rest and in transit, strict access control, compliance with international standards (GDPR, HIPAA), and human oversight, especially in critical decision-making.

As businesses embark on their AI journey, adhering to data protection principles is essential. Mr. Doan Huu Hau outlined five crucial principles for constructing a secure AI system.

The first principle is safety by design: AI systems must be conceived and built with security in mind, integrating information security features, risk assessment, and compliance with international information security standards from the outset.

The second principle emphasizes transparency and explainability. For GenAI systems, which are capable of generating new information from the knowledge they were trained on, transparency is vital. Because GenAI can produce answers and generate information even where no underlying data is available, the design and deployment must ensure that the decision-making process is clear, understandable, auditable, and easily interpretable by users.
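GenAI models are far more complex than the toy scorer below, but the sketch illustrates the underlying idea of explainability with a hypothetical linear risk model: every output is reported together with the per-feature contributions that produced it, so a user can audit why a score came out the way it did.

```python
# Toy explainability illustration: for a linear scoring model, each feature's
# contribution to a decision is simply weight * value, so the "why" behind an
# output can be reported alongside the output itself.
import numpy as np

feature_names = ["age", "blood_pressure", "glucose", "bmi"]   # hypothetical features
weights = np.array([0.02, 0.03, 0.05, 0.01])                  # hypothetical trained weights
bias = -4.0

patient = np.array([52.0, 88.0, 6.1, 27.0])                   # hypothetical input

contributions = weights * patient
score = contributions.sum() + bias

print(f"risk score: {score:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:<15} contributes {c:+.2f}")
```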

The third principle focuses on attack resistance, encompassing the ability to withstand attacks, detect intrusions, and respond promptly to incidents.

The fourth principle is data protection, involving data encryption, strict access management, and data redundancy elimination to prevent interference and maintain accuracy. Ensuring that only authorized individuals with the appropriate responsibilities and permissions can access data at the right time is crucial.
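The sketch below illustrates two of these measures together, assuming the third-party Python package cryptography is installed; the role table and the patient record are hypothetical.

```python
# Minimal sketch of the data-protection principle: records are encrypted at
# rest, and a simple role check gates who may decrypt them.
from cryptography.fernet import Fernet

# Hypothetical role table; a real system would use an IAM or policy service.
PERMISSIONS = {"doctor": {"read_records"}, "receptionist": set()}

key = Fernet.generate_key()        # in practice, held in a key-management service
cipher = Fernet(key)

record = b"patient=NVA; diagnosis=type-2 diabetes"
encrypted_record = cipher.encrypt(record)      # what actually gets stored

def read_record(role: str, token: bytes) -> bytes:
    """Decrypt a record only for roles holding the read_records permission."""
    if "read_records" not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not read patient records")
    return cipher.decrypt(token)

print(read_record("doctor", encrypted_record))         # succeeds
try:
    read_record("receptionist", encrypted_record)
except PermissionError as exc:
    print("blocked:", exc)
```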

The final principle is diversity and fairness, aiming to prevent bias in the AI system's knowledge base. If the input data is biased, the AI system may become biased, impacting its outcomes and decisions.
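One simple, concrete check in this direction is to compare label rates across groups in the training data before the model ever sees it, as in the illustrative sketch below (the groups, labels, and threshold are made up).

```python
# Small pre-training bias check: compare the rate of positive labels across
# groups in the training data and warn when the gap is large.
from collections import defaultdict

# (group, label) pairs as they might appear in a labelled training set.
samples = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
           ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, label in samples:
    totals[group] += 1
    positives[group] += label

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("positive-label rate per group:", rates)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:          # illustrative threshold
    print("warning: training data looks imbalanced across groups")
```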

"AI systems are susceptible to information security risks during their development, training, and usage. The consequences of information security breaches can be far-reaching, affecting data privacy, user behavior, finances, and the reputation of individuals and organizations," warned Mr. Doan Huu Hau.

RECOMMENDATIONS FOR BUSINESSES TO SAFELY ADOPT AI

Currently, some Vietnamese businesses have taken initial steps toward AI adoption, but the level of preparedness to ensure the security of these systems varies.

To build and operate a secure AI system, Mr. Doan Huu Hau recommended that businesses focus on two things: data quality and data security.

According to experts, at the strategic level large enterprises have recognized the importance of AI and are implementing long-term plans to integrate it into their business processes. However, not all businesses have a clear strategy, especially regarding data and system security. Small and medium-sized enterprises, despite their interest in AI, may not have the technological infrastructure needed to build secure AI systems.

On data quality, Mr. Doan Huu Hau noted that the input data directly determines the effectiveness and reliability of an AI system.

Data security should be maintained throughout the data collection, processing, storage, and sharing processes. Measures to ensure data security include data encryption, implementing the "right person, right responsibility, right time" access procedure, and deploying continuous monitoring systems to detect potential threats promptly.
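As one example of such continuous monitoring, the sketch below compares each incoming data batch against baseline statistics recorded at training time and raises an alert when a feature drifts too far; the baselines, batch sizes, and threshold are illustrative only.

```python
# Minimal continuous-monitoring sketch: compare incoming data against the
# statistics of the training set and alert when a feature drifts beyond a
# chosen threshold (e.g. a tampered or malfunctioning data source).
import numpy as np

rng = np.random.default_rng(7)

# Baseline statistics recorded when the model was trained.
baseline_mean = np.array([0.0, 10.0, 100.0])
baseline_std = np.array([1.0, 2.0, 15.0])

def check_batch(batch: np.ndarray, threshold: float = 3.0) -> list[int]:
    """Return indices of features whose batch mean drifts beyond the threshold."""
    z = np.abs(batch.mean(axis=0) - baseline_mean) / baseline_std
    return [i for i, score in enumerate(z) if score > threshold]

normal_batch = rng.normal(baseline_mean, baseline_std, size=(500, 3))
drifted_batch = normal_batch.copy()
drifted_batch[:, 2] += 80.0          # simulate a corrupted upstream feed

print("alerts on normal batch: ", check_batch(normal_batch))
print("alerts on drifted batch:", check_batch(drifted_batch))
```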

In addition to technical measures, the human factor plays a crucial role in ensuring the security of AI systems. Each employee within an organization should be trained on the importance of data security and basic security measures. A strong culture of data security should be established to ensure that all employees understand and comply with information security regulations.

Lessons from pioneering businesses indicate that success in building secure AI systems relies not only on technology but also on long-term strategies, a focus on data quality, and heightened security awareness throughout the organization.

According to Vietnamโ€™s National Strategy for Research, Development, and Application of Artificial Intelligence until 2030, AI will become a critical technology field for the country by 2025. Vietnam aims to be among the top 5 countries in ASEAN and the top 60 countries globally in AI research, development, and application by 2030.
