English English
简体中文 简体中文

Cart

Please add the product to your cart
Go to the store
    Subtotal:

    PROGRAM & INITIATIVES

    Assessment and Certification

    Artificial Intelligence Safety, Trust & Responsibility

    PROGRAM & INITIATIVES

    AI STR Certification

    Artificial Intelligence Safety, Trust & Responsibility

    Artificial Intelligence Safety, Trust, and Responsibility (AI STR) Certification is a global program created by the World Digital Technology Academy (WDTA) based on the WDTA AI STR international standards. It provides a clear framework for ensuring that AI systems are secure, trustworthy, and responsibly governed.

    The certification assesses both technical safeguards and ethical and legal compliance, helping organizations manage risks such as data protection, fairness, and broader societal impact. Achieving AI STR Certification demonstrates a commitment to responsible AI and strengthens trust with users, partners, and regulators. The program supports the global development of AI that is safe, transparent, and aligned with public interest.

    AI Application

    Evaluating and validating the performance of generative artificial intelligence (GenAI) applications in terms of safety, trust, and accountability

    Large Language Model

    Establish rigorous security standards and evaluation procedures to ensure large language models resist adversarial attacks, reduce risks, and promote responsible use.

    AI Agent

    The types of agent risks were further categorized, and testing methods such as model inspection, network communication analysis, and tool fuzz testing were improved and innovatively proposed.

    PROGRAM & INITIATIVES

    Generative AI Application Security Certification

    The Generative AI Application Security Certification is conducted based on the WDTA AI STR-01 standard.

    The Generative AI Application Security Certification is conducted based on the WDTA AI STR-01 standard. It aims to assess and validate the performance of generative artificial intelligence (GenAI) applications in terms of security, trustworthiness, and responsibility. The certification covers multiple layers of testing and validation standards, including base model selection, embeddings and vector databases, prompt execution, agent behavior, fine-tuning, large model response processing testing, and runtime security of AI applications.

    This certification is designed to ensure the security and reliability of AI applications throughout their lifecycle. It provides developers and organizations with a set of standards and guidelines to enhance the security of AI applications, mitigate potential security risks, improve overall quality, and promote the responsible development and deployment of AI technology.

    Basic model selection
    Embedding and Vector Database
    Quick execution
    Proxy behavior
    The runtime security of artificial intelligence applications.
    Large scale model response processing testing
    fine-tuning

    PROGRAM & INITIATIVES

    Certification Basis

    WDTA AI STR-01 Generative AI Application Security Testing and Validation Standard

    Generative AI Application Security Testing and Validation Standard

    Target

    Applications built on generative artificial intelligence (from LLM to GenAI)

    This certification addresses key areas:

    Base Model Selection

    Prompt and Knowledge Retrieval (RAG)

    Embedding and Vector Database

    Prompt Execution/Inference

    Agentic Behaviors

    Fine-Tuning

    Response Handling

    PROGRAM & INITIATIVES

    Large Language Model Security Certification

    Based on the WDTA AI STR-02 Large Language Model Security Testing Method

    The Large Language Model Security Certification is conducted based on the WDTA AI STR-02 standard. It aims to ensure that large language models can withstand various adversarial attacks, reduce potential risks, and promote their responsible application in real-world scenarios by establishing strict security testing standards and evaluation procedures. The certification includes comprehensive evaluations of models facing various attack types during pre-training, fine-tuning, and inference stages (such as data poisoning, backdoor attacks, instruction hijacking, and prompt obfuscation).

    By setting clear evaluation metrics such as attack success rate (R) and refusal rate (D), the certification provides organizations with a scientific and systematic security detection method, helping them identify and fix potential vulnerabilities in the models, thereby enhancing the overall security and reliability of the models.

    Additionally, the evaluation emphasizes protective measures for large language models in terms of data privacy, model integrity, and contextual applicability, ensuring that they can produce safe and reliable results when faced with adversarial examples.

    PROGRAM & INITIATIVES

    Certification Basis

    WDTA AI STR-02 Large Language Model Security Testing Method

    Target

    Large Language Models

    Large Language Model Security Testing Method

    PROGRAM & INITIATIVES

    Value of LLM Security Certification

    Enhancing Security and Reliability

    Through rigorous security testing and evaluation, enterprises can identify and repair potential vulnerabilities in large language models, significantly improving the overall security and reliability of the models and reducing the risk of adversarial attacks.

    Reducing Operational Risks

    Through systematic security detection and protective measures, enterprises can effectively reduce operational risks caused by model security issues, avoiding potential economic losses and brand damage.

    Promoting Responsible Use

    Helps organizations consider social impact and ethical issues more thoroughly when designing and deploying AI systems, thereby promoting the responsible use of AI technology.